231

Soil moisture modelling using TWI and satellite imagery in the Stockholm region

Haas, Jan January 2010 (has links)
Soil moisture is an important element in hydrological land-surface processes as well as land-atmosphere interactions and has proven useful in numerous agronomical, climatological and meteorological studies. Since hydrological soil moisture estimates are usually point-based measurements at a specific site and time, the spatial and temporal dynamics of soil moisture are difficult to capture. Soil moisture retrieval techniques in remote sensing offer a way to overcome these limitations by continuously providing distributed soil moisture data at different scales and varying temporal resolutions. The main purpose of this study is to derive soil moisture estimates for the Stockholm region by means of two different approaches, one hydrological and one from remote sensing, and to compare the two methods. Soil moisture is modelled both with the Topographic Wetness Index (TWI), based on digital elevation data, and with the Temperature-Vegetation Dryness Index (TVDI), derived from the relation between land surface temperature and the Normalized Difference Vegetation Index (NDVI). Correlations between the two index distributions are investigated, possible index dependencies on vegetation cover and underlying soil types are explored, and field measurements of soil moisture are related to the derived indices. The results indicate that, with a very low Pearson correlation coefficient of 0.023, no linear dependency exists between the two indices. Classifying the indices into low, medium and high value categories did not result in higher correlations. Neither index distribution is found to be related to soil types, and only the TVDI correlates with changes in vegetation cover distribution. In situ measured values correlate better with the TVDI, although neither index is considered to give superior results in the area due to the low correlation coefficients. Which index to apply depends on the available data, the intended use and the scale. The TWI surface is considered a more suitable soil moisture representation for analyses at smaller scales, whereas the TVDI should prove more valuable at a larger, regional scale. The lack of correlation between the indices is attributed to the fact that they differ greatly in their underlying theories. However, the synthesis of hydrological modelling and remote sensing is a promising field of research. Establishing combined, effective models for soil moisture determination over large areas requires more extensive in situ measurements and methods to fully assess the models' capabilities, limitations and value for hydrological predictions.
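The two indices compared in the thesis are both simple per-cell formulas. A minimal sketch of each (not the thesis's GIS implementation; the example cell values are invented):

```python
import math

def twi(upslope_area_m2, cell_width_m, slope_deg):
    """Topographic Wetness Index, ln(a / tan(beta)): a is the specific
    catchment area (upslope area per unit contour width), beta the slope."""
    a = upslope_area_m2 / cell_width_m
    tan_b = max(math.tan(math.radians(slope_deg)), 1e-6)  # guard flat cells
    return math.log(a / tan_b)

def tvdi(ts, ts_wet, ts_dry):
    """Temperature-Vegetation Dryness Index: position of the observed land
    surface temperature between the wet (0) and dry (1) edges of the
    LST/NDVI scatter plot."""
    return (ts - ts_wet) / (ts_dry - ts_wet)

# A flat cell collecting a lot of upslope water scores high (wet) ...
wet_cell = twi(upslope_area_m2=50_000, cell_width_m=10, slope_deg=1)
# ... a steep cell with little contributing area scores low (dry).
dry_cell = twi(upslope_area_m2=200, cell_width_m=10, slope_deg=25)
```

TWI is computed purely from the elevation model, while TVDI needs the thermal and NDVI imagery, which is one reason the two surfaces can disagree so strongly.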
232

Cybersäkerhet: Från reaktiv till proaktiv

Waregård, Ellen, Wilke, Frida January 2022 (has links)
The number of reported cybercrimes in Sweden is increasing every year. Cybercrimes are becoming more sophisticated and the attackers are more skilled than before. Attackers use different tactics, techniques and procedures (TTP) to achieve their goals. These TTP can be identified and later used to combat future cyberattacks. This process is known as Tactical Threat Intelligence (TTI) and is characterized by the use of open source intelligence (OSINT) to gather information about previous attacks and TTP. This paper is a literature review providing background on the topic. To investigate the topic further, the paper also presents an analysis of three different threat intelligence sharing platforms to deepen the understanding of how TTI is used today. A statistical analysis is also presented in order to predict future cyberthreats. The results of the analysis of the threat intelligence sharing platforms clearly display the need to search for information in more than one source. This information becomes the foundation of intelligence, which makes information gathering one of the most important steps when working with TTI. The results of the statistical analysis show that cybercrime in Sweden will continue to rise. One of the biggest challenges was to identify the current state of the global cyberthreat landscape, since global statistics for cybercrime could not be found. However, the Covid-19 pandemic has forced more people to work from home, which has increased the number of potential cybercrime victims since home security tends to be lower than at a physical office. Despite this, the number of reported cybercrimes has not increased remarkably.
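A statistical analysis that predicts a continued rise in yearly counts can be as simple as an ordinary least-squares trend line. A minimal sketch of that idea (the counts below are invented, not Swedish crime statistics, and the thesis does not specify this exact method):

```python
def linear_trend(years, counts):
    """Ordinary least squares for counts ~ a + b * year; returns (a, b)."""
    n = len(years)
    mx, my = sum(years) / n, sum(counts) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(years, counts))
         / sum((x - mx) ** 2 for x in years))
    return my - b * mx, b

# Invented yearly counts of reported cybercrimes, rising over time:
years = [2017, 2018, 2019, 2020, 2021]
counts = [100, 130, 155, 190, 215]
a, b = linear_trend(years, counts)
forecast_2022 = a + b * 2022   # extrapolate the trend one year ahead
```

A positive slope `b` is what supports a "will continue to rise" conclusion; the extrapolated value gives the point forecast.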
233

Formal security verification of the Drone Remote Identification Protocol using Tamarin / Formell säkerhetsverifiering av Drone Remote Identification Protocol med hjälp av Tamarin

Ahokas, Jakob, Persson, Jonathan January 2022 (has links)
The current standard for remote identification of unmanned aircraft does not contain any form of security considerations, opening up possibilities for impersonation attacks. The newly proposed Drone Remote Identification Protocol aims to change this. To ensure that the protocol is secure before real-world implementation, we conduct a formal verification using the Tamarin Prover tool, with the goal of detecting possible vulnerabilities. The underlying technologies of the protocol are studied and important aspects are identified. The main contribution of this thesis is the formal verification of session key secrecy and message authenticity within the proposed protocol. Certain aspects of protocol security are still missing from the scripts, but the protocol is deemed secure to the extent of the model. Many features of both the protocol and Tamarin Prover are presented in detail, serving as a potential base for continued work toward a complete formal verification of the protocol in the future.
234

Intrusion Detection For The Controller Pilot Data Link Communication : Detecting CPDLC attacks using machine learning / Intrångsdetektering för CPDLC

Westergren, Adam, Skoglund, Alexander January 2022 (has links)
Controller Pilot Data Link Communications (CPDLC) is a system for text-based communication between air traffic control and flight crew. It currently lacks protection against many common types of attack, making the system vulnerable to attackers. This can have severe consequences for the safety and reliability of air travel. One such attack is the alteration attack. This thesis focuses on detecting alteration attacks with the use of machine learning. It also describes how CPDLC messages are structured and how to prepare a dataset of CPDLC messages before applying machine learning models. Using Datawig for data imputation made it possible to prepare the dataset by filling in missing values. With the prepared dataset, two deep learning models, an RNN and an LSTM, were trained to distinguish genuine from fabricated messages. The dataset consists of a combination of real and altered CPDLC messages. It was found that both models could identify real and fake CPDLC messages from the dataset with high accuracy. This implies that it is possible to build and train models that detect and differentiate altered messages from genuine ones, which could be built upon further to develop a system for both detecting and preventing alteration attacks.
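Before recurrent models such as RNNs or LSTMs can be trained, each CPDLC message has to be turned into a fixed-length integer sequence. A minimal sketch of that preprocessing step (the message wording and helper names are illustrative, not the thesis's actual pipeline or the Datawig imputation step):

```python
def build_vocab(messages):
    """Assign each distinct token an integer id; 0 is reserved for padding."""
    vocab = {}
    for msg in messages:
        for tok in msg.split():
            vocab.setdefault(tok, len(vocab) + 1)
    return vocab

def encode(msg, vocab, maxlen):
    """Integer-encode a message and right-pad it to a fixed length,
    as expected by recurrent sequence models."""
    ids = [vocab.get(tok, 0) for tok in msg.split()][:maxlen]
    return ids + [0] * (maxlen - len(ids))

# Hypothetical CPDLC-style uplink messages (wording is illustrative):
msgs = ["CLIMB TO FL350", "DESCEND TO FL200", "CONTACT STOCKHOLM 132.15"]
vocab = build_vocab(msgs)
encoded = [encode(m, vocab, maxlen=5) for m in msgs]
```

The padded integer matrix is what would be fed, batch by batch, into an embedding layer followed by the RNN or LSTM classifier.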
235

Bildkomprimering med autoencoder och discrete cosine transform / Image compression with autoencoder & the discrete cosine transform

Larsson, Martin January 2022 (has links)
Millions of pictures are captured each year for different purposes, making digital images a ubiquitous part of modern-day life. This proliferation was made possible by image compression standards, since these images need to be stored somewhere and somehow. In this thesis I explore the use of machine learning together with the discrete cosine transform to compress images. An autoencoder was developed that was able to compress images with results comparable to the JPEG standard. The results lend credence to the hypothesis that the combination of a simple autoencoder and the discrete cosine transform offers a simple and effective method for image compression.
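The transform shared by JPEG and the thesis's approach can be sketched in a few lines. A minimal orthonormal 1-D DCT-II/DCT-III pair, used here to "compress" a signal by discarding its high-frequency coefficients (illustration of the transform only, not the thesis's autoencoder):

```python
import math

def dct(x):
    """Orthonormal type-II DCT."""
    N = len(x)
    return [(math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct(X):
    """Orthonormal type-III DCT, the inverse of dct() above."""
    N = len(X)
    return [X[0] / math.sqrt(N)
            + sum(math.sqrt(2 / N) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                  for k in range(1, N))
            for n in range(N)]

# Keep only the 4 lowest-frequency coefficients of 8 (2:1 compression).
signal = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = dct(signal)
approx = idct(coeffs[:4] + [0.0] * 4)
```

JPEG applies the same idea to 8x8 pixel blocks in two dimensions, followed by quantization and entropy coding; in the thesis the autoencoder operates on such transform coefficients.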
236

Algoritm för automatiserad generering av metadata

Karlsson, Fredrik, Berg, Fredrik January 2015 (has links)
Sveriges Radio stores their data in large archives, which makes it hard to retrieve specific information. The sheer size of the archives makes retrieving information about a specific event difficult and causes a big problem. To solve this problem a more consistent use of metadata is needed. This resulted in an investigation of metadata and keyword generation. The appointed task was to automatically generate keywords from transcribed radio shows. This included an investigation, based on previous work, of which systems and algorithms can be used to generate keywords. An application was also developed which suggests keywords for a text to a user. This application was tested and compared to other already existing software, as well as to different methods and techniques based on both linguistic and statistical algorithms. The resulting analysis showed that the developed application generated many accurate keywords, but also a large number of keywords in general. The comparison also showed that the developed algorithm achieved better recall than the already existing software, which in turn produced better precision in its keywords. / Sveriges Radio sparar sin data i stora arkiv vilket gör det svårt att hitta specifik information. På grund av denna storlek blir uppgiften att hitta specifik information om händelser ett stort problem. För att lösa problemet krävs en mer konsekvent användning av metadata, därför har en undersökning om metadata och nyckelordsgenerering gjorts. Arbetet gick ut på att utveckla en algoritm som automatiskt kan generera nyckelord från transkriberade radioprogram. Det ingick också i arbetet att göra en undersökning av tidigare arbeten för att se vilka system och algoritmer som kan användas för att generera nyckelord. Dessutom utvecklades en applikation som genererar färdiga nyckelord som förslag till en användare. Denna applikation jämfördes och utvärderades med redan existerande program. Metoderna som använts bygger på både lingvistiska och statistiska algoritmer. En analys av resultaten gjordes och visade att den utvecklade applikationen genererade många precisa nyckelord, men även till antalet stora mängder nyckelord. Jämförelsen med ett redan existerande program visade att täckningen var bättre för den utvecklade applikationen, samtidigt som precisionen var bättre för det redan existerande programmet.
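A minimal sketch of statistical keyword generation and of the precision/recall evaluation used in the comparison. TF-IDF is one standard statistical technique; the toy corpus is invented, and this is not Sveriges Radio's or the thesis's actual algorithm:

```python
import math

def tfidf_keywords(doc_tokens, corpus, k=3):
    """Rank a document's tokens by TF-IDF and return the top k."""
    n_docs = len(corpus)
    scores = {}
    for tok in set(doc_tokens):
        tf = doc_tokens.count(tok) / len(doc_tokens)           # term frequency
        df = sum(1 for d in corpus if tok in d)                # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1            # smoothed IDF
        scores[tok] = tf * idf
    return sorted(scores, key=scores.get, reverse=True)[:k]

def precision_recall(predicted, gold):
    """Fraction of suggested keywords that are correct (precision)
    and fraction of gold keywords that were found (recall)."""
    tp = len(set(predicted) & set(gold))
    return tp / len(predicted), tp / len(gold)

doc = "radio program metadata metadata archive".split()
corpus = [doc, "weather news report".split(), "radio news".split()]
keywords = tfidf_keywords(doc, corpus, k=2)
p, r = precision_recall(["metadata", "program"], ["metadata", "archive"])
```

Suggesting many keywords tends to raise recall at the cost of precision, which is exactly the trade-off the comparison in the abstract describes.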
237

Stochastic Watershed : A Comparison of Different Seeding Methods

Gustavsson, Kenneth, Bengtsson Bernander, Karl January 2012 (has links)
We study modifications to the novel stochastic watershed method for segmentation of digital images. This is a stochastic version of the original watershed method which is realized repeatedly in order to create a probability density function for the segmentation. The study is primarily done on synthetic images with both same-sized and differently sized regions, and at the end we apply our methods to two endothelial cell images of the human cornea. We find that, for same-sized regions, the seeds should be placed in a spaced grid instead of drawn from a uniform random distribution in order to yield a more accurate segmentation. When images with differently sized regions are being segmented, the seeds should be placed dependent on the gradient; by also adding uniform or Gaussian noise to the image in every iteration, a satisfactory result is obtained.
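The two seeding strategies compared for same-sized regions can be sketched directly. A simplified illustration of the seed placement only, not the full stochastic watershed:

```python
import random

def grid_seeds(width, height, n_per_axis):
    """Evenly spaced seeds: the cell centres of an n x n partition."""
    sx, sy = width / n_per_axis, height / n_per_axis
    return [((i + 0.5) * sx, (j + 0.5) * sy)
            for i in range(n_per_axis) for j in range(n_per_axis)]

def uniform_seeds(width, height, n, seed=0):
    """The baseline: n seeds drawn uniformly at random over the image."""
    rng = random.Random(seed)
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

grid = grid_seeds(100, 100, 4)       # 16 seeds, one per grid cell
rand = uniform_seeds(100, 100, 16)   # 16 seeds, possibly clustered
```

With same-sized regions, the grid guarantees roughly one seed per true region, whereas uniform random seeds can cluster and leave regions unseeded, which is why the grid yields the more accurate segmentation.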
238

Predictive modeling med maximal entropi för att förutsäga platser med fornnordisk bosättning

Rönnlund, Elias January 2021 (has links)
A complete picture of prehistoric settlement has always been difficult to establish, given how time hides these sites and remains through the decay of the materials they were made of and the build-up of new layers of sediment. Archaeologists have used a wide range of methods and techniques to find traces of these prehistoric remains, and in modern times GIS has become a common tool for assisting this process. In this study, predictive modelling is used to estimate the probability of finding new archaeological sites, based on already known finds and their relationship to properties of the landscape and environment. Using a relatively new method that applies the principle of maximum entropy in its algorithm, this study aims to demonstrate the potential of this technique in Sweden, both to ease archaeologists' work and to give insight into past peoples' success and choices of settlement. Using the Maxent software, probability maps of the study area were produced from 221 find sites and up to 16 factors, together with statistical diagrams giving deeper insight into the model-building process. Validation of the results showed very good performance. Despite the excellent results, there is some scepticism about how helpful this particular model would be for archaeology in locating new prehistoric settlements. Although this study was rather limited in its access to data, it has demonstrated the potential that algorithms based on the principle of maximum entropy hold for archaeology in Sweden. With a larger and more precise selection of find sites and factors, covering environment, landscape and more, models such as this one have great potential both to assist archaeology in finding still-hidden Old Norse settlements and to extract information about prehistoric peoples' lives and societies.
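At its core, the maximum-entropy approach is a log-linear model: each raster cell's relative suitability is the exponential of a weighted sum of its environmental features, normalised over the study area. A minimal sketch (the feature layers and weights below are invented, and this is not the Maxent software's full training procedure, which fits the weights from the occurrence records):

```python
import math

def suitability_map(cells, weights):
    """Normalise per-cell scores exp(lambda . f) into a probability surface."""
    raw = [math.exp(sum(w * f for w, f in zip(weights, feats)))
           for feats in cells]
    total = sum(raw)
    return [r / total for r in raw]

# Invented standardised layers per cell: (proximity to water, slope, soil).
cells = [(0.9, 0.1, 0.8),   # near water, flat, fertile
         (0.2, 0.7, 0.3),   # far from water, steep, poor soil
         (0.6, 0.4, 0.5)]
weights = (1.5, -1.0, 0.8)  # hypothetical fitted coefficients
probs = suitability_map(cells, weights)
```

The resulting surface is the probability map the study produces: cells whose features resemble those of known find sites receive high values.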
239

Evaluating player performance and usability of graphical FPS interfaces in VR

Willemsen, Mattias January 2019 (has links)
Background. When designing video games for Virtual Reality, graphical user interfaces (GUIs) cannot always be designed as they have been for traditional video games. An often recommended approach is to merge the interface with the game world, but it is unclear if this is the best idea in all cases. As the market for Virtual Reality is growing quickly, more research is needed to create an understanding of how GUIs should be designed for Virtual Reality. Objectives. The thesis reviews existing GUI type classifications and adapts them for VR. A method to compare the GUI types to each other is selected and conclusions are drawn about how they affect player performance, usability, and preference. Methods. A VR FPS game is developed and an experiment is designed around it. The experiment tests the player's performance with each of three distinct GUI types and also presents questionnaires to capture personal preference. Results. Both player performance and subjective opinion seem to favour the geometric GUI. Conclusions. The often recommended approach of designing GUI elements as part of the game world may not always be the best option, as it may sacrifice usability and performance for immersion. / Bakgrund. Vid design av grafiska användargränssnitt (GUI) till Virtual Reality kan inte alltid samma design användas som tidigare har använts i traditionella TV-spel. Ett tillvägagångssätt som ofta rekommenderas är att göra spelets GUI till en del av spelvärlden, men det är oklart om det är den bästa strategin när man väger in prestation, användbarhet och preferens. Eftersom marknaden för Virtual Reality växer snabbt behövs mer forskning för att få en förståelse för hur grafiska användargränssnitt bör designas. Syfte. Detta examensarbete undersöker existerande klassifikationer av GUI:n och anpassar dem för VR. En metod för att jämföra de olika typerna av GUI väljs ut och slutsatser dras om hur de påverkar prestanda, användbarhet och preferens. Metod. Ett VR FPS-spel utvecklas och ett experiment designas baserat på det. Experimentet testar spelares prestanda med tre olika typer av GUI. Spelarna svarar även på enkäter om upplevd användbarhet och preferens. Resultat. Data från både prestanda och personlig preferens verkar förespråka geometriskt GUI. Slutsatser. Metoden som ofta rekommenderas, där spelets GUI arbetas in som en del av spelvärlden, verkar inte alltid vara det bästa valet då det kan offra användbarhet och prestanda för inlevelse.
240

Exploration of using Blockchain technology for forensically acceptable audit trails with acceptable performance impacts

Sobeh, Abedallah January 2019 (has links)
In this work, we test the possibility of using Blockchain to preserve data such as logs. Data inside the Blockchain is preserved to be used as digital evidence. The study examines whether Blockchain technology satisfies the requirements for digital evidence in a Swedish court. The study simulates different test scenarios, each tested on three different hardware configurations. The tests fall into two main categories, stream tests and batch tests. In the stream test, we measure the performance impact on different systems when each log is sent in a separate block. The batch test has two sub-categories, batch with data and batch without data. In this test, we simulate sending 80 GB of data each day; in total we send 80 GB of data, but we vary the time between blocks and adjust the block size. In our tests, we focused on three metrics: CPU load, network bandwidth usage and storage consumption for each scenario. After the tests, we collected the data and compared the results of each hardware configuration within the same scenario. It was concluded that Blockchain does not scale up in stream mode, being limited to ten blocks/s regardless of hardware configuration. On the other hand, Blockchain can manage 80 GB of data each day without stressing system resources. / Det följande arbetet undersöker vilka möjligheter som Blockchain har som ett verktyg för att spara och bevara känslig data, för att kunna användas som digitala bevis. Dessutom undersöker studien giltigheten av Blockchain-tekniken som bevis i domstolen. Studien bygger på ett test som simulerar 15 scenarier med tre olika hårdvarukonfigurationer. Testet delas upp i två huvudkategorier, stream test och batch test. I stream testet testar vi prestationseffekten på olika system när varje logg skickas i ett separat block. Under batch testet har vi två underkategorier, vilka är batch med data och batch utan data. I batch testet simulerar vi att skicka 80 GB data varje dag. Under batch testet har vi dessutom testat att ändra på tiden mellan varje blockgenerering och även justerat blockens storlek. I våra test har vi fokuserat på tre mätvärden: CPU-belastning, användning av nätverksbandbredd och konsumtion av lagringsutrymmet i varje scenario. När samtliga test slutförts började vi med datainsamling och jämförde resultaten från varje system inom samma scenario. Slutsatsen är att Blockchain inte skalar upp i stream testet, då max antal block som skapas och skickas till data-noder är begränsat till tio block/sek, oavsett hårdvarukonfiguration. Däremot, vid batch testet, kan Blockchain hantera överföring av 80 GB data varje dag (24 timmar) utan att anstränga systemsresurser.
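The tamper-evidence property that makes a Blockchain attractive for log preservation comes from chaining each block to the hash of its predecessor. A minimal hash-chain sketch (plain SHA-256 linking only, not the consensus machinery or the specific platform benchmarked in the thesis):

```python
import hashlib
import json

def make_block(prev_hash, logs):
    """Build a block whose hash covers both its logs and its predecessor."""
    body = {"prev": prev_hash, "logs": logs}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def verify_chain(chain):
    """Recompute every hash and check the prev-hash linkage."""
    for i, blk in enumerate(chain):
        body = {"prev": blk["prev"], "logs": blk["logs"]}
        if blk["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, ["boot ok"])]
chain.append(make_block(chain[-1]["hash"], ["login alice", "logout alice"]))
ok_before = verify_chain(chain)
chain[0]["logs"][0] = "boot FAILED"   # tamper with an archived log entry
ok_after = verify_chain(chain)
```

The stream-versus-batch trade-off in the thesis maps directly onto this structure: one log per block maximises hashing and network overhead, while batching many logs per block amortises it.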
