311 |
Automating regression testing on fat clients / Automatiska regressionstester av feta klienter. Österberg, Emil January 2020 (has links)
Regression testing is important but time consuming. Automating the testing has many benefits. It will save money for companies because they will not have to pay testers to manually test their applications. It will produce better software with fewer bugs, as testing can be done more frequently and bugs will therefore be found faster. This thesis has compared two techniques used to automate regression testing: the more traditional component-level record-and-replay tools, and visual GUI testing tools that use image recognition. Eight tools in total were tested and compared, four of each technique. The system under test for this project was a fat client application used by Trafikverket. After automating a test suite using all tools, it could be concluded that the component-level record-and-replay tools had some advantages over the visual GUI testing tools, especially when it comes to verifying the state of the system under test. The benefits of visual GUI testing tools come from their independence from the system under test and from the fact that the technique more closely mimics how a real user interacts with the GUI. / Regressionstestning är en viktig men tidskrävande del av mjukvaruutveckling. Att automatisera testningen har flera fördelar. Det sparar pengar för företag eftersom de inte behöver betala testare för att manuellt utföra testerna. Det resulterar i bättre mjukvara med färre buggar eftersom man kan testa oftare och därmed hitta buggar tidigare. Det här projektet har undersökt och jämfört två tekniker som kan användas för att automatisera regressionstestning och verktyg som använder dessa tekniker. Dels de traditionella verktygen som identifierar objekt på komponentnivå samt verktyg som istället använder sig av bildigenkänning för att identifiera objekt. Totalt testades och utvärderades åtta verktyg, fyra av varje tekniktyp. Systemet som testades under projektet är en skrivbordsapplikation som används av Trafikverket. Efter att ha automatiserat en testsekvens med varje verktyg kunde konstateras att verktygen som identifierar objekt på komponentnivå har flera fördelar över verktyg som enbart använder bildigenkänning. Detta gäller främst när det kommer till verifiering av systemets tillstånd. Den största fördelen med bildigenkänningsverktygen visade sig vara dess oberoende från systemet, samt att tekniken mer efterliknar en verklig användare.
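The contrast between the two techniques can be illustrated with a small sketch. It is illustrative only and not taken from the thesis: pywinauto stands in for a component-level record-and-replay tool and pyautogui for a visual GUI testing tool, and the window title, control identifiers, and image files are hypothetical.

    # Illustrative only: pywinauto and pyautogui stand in for the commercial
    # tools compared in the thesis; titles, control ids and image files are
    # hypothetical.
    from pywinauto.application import Application
    import pyautogui

    # Component-level record and replay: the tool talks to the widget tree,
    # which makes it easy to read back and verify the state of the system.
    app = Application(backend="uia").connect(title="Trafikverket Client")
    main = app.window(title="Trafikverket Client")
    main.child_window(title="Save", control_type="Button").click_input()
    assert main.child_window(auto_id="statusLabel").window_text() == "Saved"

    # Visual GUI testing: the tool only sees pixels, so it is independent of
    # the GUI toolkit and mimics a real user, but verification is limited to
    # what can be matched on screen.
    button = pyautogui.locateCenterOnScreen("save_button.png", confidence=0.9)
    pyautogui.click(button)
    # State verification also has to happen through the screen, e.g. by
    # matching a screenshot of the expected "saved" banner.
    pyautogui.locateOnScreen("saved_banner.png", confidence=0.9)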
|
312 |
En Jämförelse av För- och Nackdelar med VR och Tvådimensionell Visualisering inom Testning och Verifikation av Autonoma Fordonsfunktioner / A comparison between VR and Two Dimensional Visualization within Testing and Verification of Autonomous Functions in Vehicles. Eriksson, Eivind, Aronsson, Alfred January 2021 (has links)
The goal of this study is to compare Virtual Reality with a head-mounted display against traditional two-dimensional monitors when used in simulation and testing. By conducting this comparison, the aim was to find out whether a visualization method that uses three dimensions, as opposed to one that uses only two, provides a better solution for viewing three-dimensional data gathered from actual vehicles in real-world tests. This was investigated by developing a tool that can visualize data from a dataset in both two and three dimensions. The tool was then evaluated by letting a group of participants, whose work is related to testing within the automotive industry, perform tasks using the tool while timing the results. In addition to the timed results, the participants were also asked to answer a questionnaire, with the aim of providing a more detailed look into how they experienced the two methods of visualisation. Overall, all participants had a positive impression of using VR as a method of visualization, and it was concluded that the participants performed marginally better with VR, but at the cost of slight discomfort and some fatigue. However, the accuracy of the results could be improved, as the precision of the gathered data was limited by the method used to collect it in the experiment.
|
313 |
Improving Soft Real-time Performance of Fog Computing. Struhar, Vaclav January 2021 (has links)
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, and thus it decreases the time unpredictability of cloud computing that stems from (i) the computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. The computation in fog computing provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded, real-time (RT) response times. The correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing are attributed to two main aspects: computation and communication. In this thesis, we explore both aspects, targeting soft RT applications in fog computing, in which the usefulness of the produced computational results degrades when real-time requirements are violated. With regard to computation, we provide a systematic literature survey on novel lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
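As a rough illustration of the two levers discussed above (temporal isolation of a container's computation and bandwidth distribution for its communication), the sketch below shows how a fog node could grant a container a bounded real-time CPU budget through the Linux RT group-scheduling cgroup interface and shape its network share with tc. It is a minimal sketch under stated assumptions (a kernel built with CONFIG_RT_GROUP_SCHED, root privileges, hypothetical cgroup and interface names), not the mechanism proposed in the thesis.

    # Minimal sketch, not the thesis implementation. Paths, names and budgets
    # below are illustrative assumptions.
    import pathlib
    import subprocess

    CGROUP = pathlib.Path("/sys/fs/cgroup/cpu/rt-container-42")   # hypothetical group

    def grant_rt_budget(runtime_us: int, period_us: int = 1_000_000) -> None:
        """Let tasks in the group run with real-time priority for runtime_us per period."""
        CGROUP.mkdir(parents=True, exist_ok=True)
        (CGROUP / "cpu.rt_period_us").write_text(str(period_us))
        (CGROUP / "cpu.rt_runtime_us").write_text(str(runtime_us))

    def shape_bandwidth(interface: str, rate: str) -> None:
        """Cap the egress bandwidth of the container's virtual network interface."""
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", interface, "root",
             "tbf", "rate", rate, "burst", "32kbit", "latency", "50ms"],
            check=True)

    grant_rt_budget(runtime_us=200_000)     # 20 % of every 1 s period for this container
    shape_bandwidth("veth-rt42", "10mbit")  # assumed veth name of the container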
|
314 |
Evaluating the personalisation potential in local news / En utvärdering av personaliseringspotentialen i lokala nyheter. Angström, Fredrik, Faber, Petra January 2021 (has links)
Personalisation of content is a frequently used technique intended to improve user engagement and provide more value to users. Systems designed to provide recommendations to users are called recommender systems and are used in many different industries. This study evaluates the potential of personalisation in a media group primarily publishing local news, and studies how information stored by the group may be used for recommending content. Specifically, the study focuses on content-based filtering by article tags and on user grouping by demographics. The study first analyses the data stored by the media group to evaluate what information, data structures, and trends have potential use in recommender systems. These insights are then applied in the implementation of recommender systems that leverage the data to produce personalised recommendations. When evaluating the performance of these recommender systems, it was found that tag-based content selection and demographic grouping each contribute to accurately recommending content, but that neither method on its own is sufficient for providing fully accurate recommendations.
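A minimal sketch of the two ideas evaluated here, content-based filtering on article tags with a demographic fallback for cold-start users, is shown below. The data structures, field names, and example tags are assumptions for illustration, not the media group's actual schema or algorithm.

    # Illustrative sketch only; the schema and scoring are assumptions.
    from collections import Counter

    def tag_profile(read_articles):
        """Aggregate the tags of everything a user has read into a weighted profile."""
        profile = Counter()
        for article in read_articles:
            profile.update(article["tags"])
        return profile

    def score(article, profile):
        """Overlap between an article's tags and the user's tag profile."""
        return sum(profile[tag] for tag in article["tags"])

    def recommend(user, candidates, demographic_top, k=5):
        profile = tag_profile(user["history"])
        if not profile:                    # cold start: fall back to what the
            return demographic_top[:k]     # user's demographic group reads most
        ranked = sorted(candidates, key=lambda a: score(a, profile), reverse=True)
        return ranked[:k]

    user = {"history": [{"tags": ["ishockey", "lokalt"]}]}
    candidates = [{"id": 1, "tags": ["ishockey", "sport"]},
                  {"id": 2, "tags": ["politik"]}]
    print(recommend(user, candidates, demographic_top=candidates))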
|
315 |
Serverrum / SD-Access. Drugge Lundström, Eric, Agmalm, Kristoffer January 2021 (has links)
We were tasked by ATEA with setting up two server rooms at Northvolt Ett in Skellefteå, in collaboration with the network team at Northvolt, and with installing distribution switches out in the factory that will later be used to connect the factory's machines to the network. This report covers Cisco DNA, SD-ACCESS, and ISE, three modernisations of how a network can be built. The report first goes through the theory needed to understand the goals of Cisco DNA, ISE, and SD-ACCESS and why they are useful in a modern network. The report then describes how we went about learning how the technology works, and what practical work was carried out to physically set up the two server rooms at Northvolt Ett.
|
317 |
Förbättring av WLAN-kvaliteten i Skellefteå kommuns verksamheter. Boqvist, Anna, Aryal, Elisha January 2021 (has links)
No description available.
|
318 |
The Challenges in Leveraging Cyber Threat Intelligence / Utmaningarna med att bemöta cyberhot mot underrättelseinformation. Gupta, Shikha, Joseph, Shijo, Sasidharan, Deepu January 2021 (has links)
Today cyber attacks, incidents, threats, and breaches continue to rise in scale and number, as sophisticated attackers break through conventional safeguards each day. Whether strategic, operational, or tactical, threat intelligence can be defined as aggregated information and analytics that feed the different pillars of any given company’s cybersecurity infrastructure. It provides numerous benefits: enabling improved prediction and detection of threats, empowering and informing organizations to make better decisions during as well as following any cyber attack, and helping them develop a proactive cyber security posture. It provides actionable intelligence, which equips senior management to take timely actions and decisions that might otherwise affect the company’s ability to keep ahead of and defend against this growing sea of threats. Driving momentum in this area also helps reduce reaction times, enabling organizations to become more proactive than reactive. Perimeter defenses no longer seem to suffice as threats become more complex and escalate, and there are no best practices or guidelines available for companies to follow before, during, or after a threat materialises, due to the multiple components involved, including the various standards and platforms. Sharing and analyzing threat data effectively requires standard formats, protocols, and a shared understanding of the relevant terminology, purpose, and representation. Threat intelligence and its analysis are seen as a vital component of cyber security, yet a tool that many companies cannot leverage and utilize fully. Securing today's organizations and businesses will therefore require a new approach. In our study with security executives working across multiple industries, we have identified the challenges that prevent the successful adoption of threat intelligence and of the growing number of platforms, including issues related to data quality, the absence of a universal standard format and protocol, difficulty enforcing data sharing based on CTI data attributes, lack of authentication and confidentiality preventing data sharing, missing API integration capability in conjunction with multi-vendor tools, lack of identification of tactical IOCs, failure to define TTL values, and a lack of deep automation, analytical, and visualization capabilities. Ensuring the right expertise and capabilities in these identified areas will help organizations leverage threat intelligence effectively, sharpen their focus, and provide the needed competitive edge.
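To make two of the identified gaps concrete (the failure to define TTL values and the difficulty of enforcing sharing on CTI data attributes), the sketch below shows a simplified, hypothetical indicator record carrying an explicit TTL and a TLP sharing attribute that a platform could enforce before forwarding the record to a partner. The field names and values are illustrative assumptions, not a real feed or a specific standard.

    # Illustrative sketch only; fields and policy are assumptions, not STIX or a real feed.
    from datetime import datetime, timedelta, timezone

    indicator = {
        "type": "indicator",
        "value": "198.51.100.7",            # documentation-range IP used as a tactical IOC
        "tlp": "AMBER",                     # sharing restriction the platform should enforce
        "created": datetime.now(timezone.utc),
        "ttl": timedelta(days=30),          # explicit expiry instead of living forever
    }

    def shareable(ind, partner_clearance):
        """Enforce sharing on the TLP attribute and drop indicators past their TTL."""
        order = ["CLEAR", "GREEN", "AMBER", "RED"]
        fresh = datetime.now(timezone.utc) < ind["created"] + ind["ttl"]
        allowed = order.index(ind["tlp"]) <= order.index(partner_clearance)
        return fresh and allowed

    print(shareable(indicator, partner_clearance="AMBER"))   # True while the IOC is fresh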
|
319 |
Predicting the Financial Impact of the CEO’s Comments in Quarterly Reports / Att förutspå den finansiella påverkan av VD-ordet i kvartalsrapporter. Westerdahl, Ludvig January 2020 (has links)
This thesis investigates how the CEO’s comments in quarterly reports affect the financial performance of a company by predicting its stock price using machine learning and natural language processing. The dataset used consists of historical information such as the stock price (quantitative data) and the CEO’s comments (qualitative data). The qualitative information was embedded using the paragraph vector document embedding technique and combined with the quantitative data in three types of models. The models tested were a Support Vector Machine and an Artificial Neural Network, evaluated against a Naive Bayes baseline. Further, each model was trained and evaluated using the quantitative, the qualitative, and the combined dataset, and the results were confirmed using statistical significance testing. Finally, the best models from the evaluation step were used to simulate a trading strategy that buys the stock if the model predicts that the price will rise. The statistically significant improvements from using the CEO’s comments, and the hypothetical profits the trading strategies rendered, show that the CEO’s comments add some predictive ability with respect to the stock price and thus the company’s financial performance.
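A minimal sketch of this pipeline is given below: gensim's Doc2Vec provides the paragraph-vector embedding of the comments, which is concatenated with a quantitative feature before a Support Vector Machine classifies whether the price rises. The toy data, feature choice, and hyperparameters are assumptions for illustration, not the thesis's actual setup.

    # Hedged sketch of the modelling pipeline; data and parameters are assumed.
    import numpy as np
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.svm import SVC

    comments = [["revenue", "grew", "strongly"], ["weak", "quarter", "margins", "fell"]]
    quantitative = np.array([[0.03], [-0.05]])   # e.g. prior-quarter return
    labels = np.array([1, 0])                    # 1 = price rose after the report

    tagged = [TaggedDocument(words=w, tags=[i]) for i, w in enumerate(comments)]
    embedder = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=50)

    X = np.hstack([np.array([embedder.infer_vector(w) for w in comments]), quantitative])
    clf = SVC(kernel="rbf").fit(X, labels)

    new_comment = ["strong", "growth", "and", "improved", "margins"]
    features = np.hstack([embedder.infer_vector(new_comment), [0.01]]).reshape(1, -1)
    print("buy" if clf.predict(features)[0] == 1 else "hold")   # toy trading rule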
|
320 |
AI-based Age Estimation using X-ray Hand Images : A comparison of Object Detection and Deep Learning models. Westerberg, Erik January 2020 (has links)
Bone age assessment can be useful in a variety of ways. It can help pediatricians predict growth and the onset of puberty, identify diseases, and assess whether a person lacking proper identification is a minor. It is a time-consuming process that is also prone to intra-observer variation, which can cause problems in many ways. This thesis attempts to improve and speed up bone age assessments by using different object detection methods to detect and segment the bones anatomically important for the assessment, and by using these segmented bones to train deep learning models to predict bone age. A dataset consisting of 12,811 X-ray hand images of persons ranging from infancy to 19 years of age was used. For the first research question, we compared the performance of three state-of-the-art object detection models: Mask R-CNN, Yolo, and RetinaNet. We chose the best-performing model, Yolo, to segment all the growth plates in the phalanges of the dataset. We proceeded to train four different pre-trained models, Xception, InceptionV3, VGG19, and ResNet152, using both the segmented and the unsegmented dataset, and compared their performance. We achieved good results using both datasets, although the performance was slightly better using the unsegmented dataset. The analysis suggests that we might be able to achieve higher accuracy with the segmented dataset by also detecting the growth plates of the carpal bones, the epiphysis, and the diaphysis. The best-performing model was Xception, which achieved a mean average error of 1.007 years using the unsegmented dataset and 1.193 years using the segmented dataset. / The presentation was held online via Zoom.
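The transfer-learning stage described above can be sketched roughly as follows: a pre-trained Xception backbone with a small regression head, trained with mean absolute error as the loss so the reported error is directly in years. The input size, head layout, and training call are illustrative assumptions, not the exact configuration used in the thesis.

    # Minimal sketch of transfer learning for bone age regression; sizes and
    # training data pipelines are assumptions.
    import tensorflow as tf

    def build_bone_age_model(input_shape=(299, 299, 3)) -> tf.keras.Model:
        backbone = tf.keras.applications.Xception(
            weights="imagenet", include_top=False, input_shape=input_shape)
        backbone.trainable = False                     # start by training only the head
        x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
        x = tf.keras.layers.Dense(256, activation="relu")(x)
        age = tf.keras.layers.Dense(1, activation="linear")(x)  # predicted age in years
        model = tf.keras.Model(backbone.input, age)
        model.compile(optimizer="adam", loss="mae")    # mean absolute error in years
        return model

    model = build_bone_age_model()
    # train_ds / val_ds would be tf.data pipelines over the hand images
    # (or the Yolo-segmented growth-plate crops); shown here only as placeholders.
    # model.fit(train_ds, validation_data=val_ds, epochs=20)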
|