  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

WiSDM: a platform for crowd-sourced data acquisition, analytics, and synthetic data generation

Choudhury, Ananya 15 August 2016 (has links)
Human behavior is a key factor influencing the spread of infectious diseases. Individuals adapt their daily routine and typical behavior during the course of an epidemic -- the adaptation is based on their perception of the risk of contracting the disease and of its impact. As a result, it is desirable to collect behavioral data before and during a disease outbreak. Such data can help in creating better computer models that can, in turn, be used by epidemiologists and policy makers to better plan for and respond to infectious disease outbreaks. However, traditional data collection methods are not well suited to acquiring human-behavior-related information, especially as it pertains to epidemic planning and response. Internet-based methods are an attractive complementary mechanism for collecting behavioral information. Systems such as Amazon Mechanical Turk (MTurk) and online survey tools provide simple ways to collect such information. This thesis explores new methods for information acquisition, especially of behavioral information, that leverage these technologies. Here, we present the design and implementation of a crowd-sourced surveillance data acquisition system -- WiSDM. WiSDM is a web-based application and can be used by anyone with access to the Internet and a browser. Furthermore, it is designed to leverage online survey tools and MTurk; WiSDM can be embedded within MTurk in an iFrame. WiSDM has a number of novel features, including (i) the ability to support a model-based abductive reasoning loop: a flexible and adaptive information acquisition scheme driven by causal models of epidemic processes; (ii) question routing: an important feature to increase data acquisition efficacy and reduce survey fatigue; and (iii) integrated surveys: interactive surveys that provide additional information on the survey topic and improve user motivation. We evaluate the framework's performance using Apache JMeter and present our results.
We also discuss three other extensions of WiSDM: the API Adapter, the Synthetic Data Generator, and WiSDM Analytics. The API Adapter is an ETL extension of WiSDM that enables extracting data from disparate data sources and loading it into the WiSDM database. The Synthetic Data Generator allows epidemiologists to build synthetic survey data using NDSSL's Synthetic Population as agents. WiSDM Analytics empowers users to perform analysis on the data by writing simple Python code using Versa APIs. We also propose a data model that is conducive to survey data analysis. / Master of Science
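The question-routing feature described above can be pictured as a rule table: each answer enables a set of follow-up questions, so respondents only see questions relevant to them, reducing survey fatigue. The sketch below is a minimal illustration of the idea; the question IDs and rules are hypothetical, not WiSDM's actual schema.

```python
# Minimal sketch of survey question routing: a follow-up question is
# shown only when a routing rule matches a previous answer.
# All question IDs and rules here are invented examples.

QUESTIONS = {
    "q1": "Have you changed your daily routine this week?",
    "q2": "Which activities did you avoid?",
    "q3": "Do you know anyone diagnosed with the disease?",
}

# Each rule: (question asked, answer given) -> follow-up questions to enable.
ROUTING_RULES = {
    ("q1", "yes"): ["q2"],
    ("q1", "no"): ["q3"],
}

def next_questions(answers):
    """Return the follow-up question IDs enabled by the answers so far."""
    enabled = []
    for (qid, expected), followups in ROUTING_RULES.items():
        if answers.get(qid) == expected:
            enabled.extend(followups)
    # Skip anything the respondent has already answered.
    return [q for q in enabled if q not in answers]

print(next_questions({"q1": "yes"}))  # ['q2']
```

A real implementation would route against causal-model-driven priorities rather than a static table, but the mechanism (answers gate the next questions) is the same.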
102

Topographic Effects in Strong Ground Motion

Rai, Manisha 14 September 2015 (has links)
Ground motions from earthquakes are known to be affected by the earth's surface topography. Topographic effects are the result of several physical phenomena, such as the focusing or defocusing of seismic waves reflected from a topographic feature and the interference between direct and diffracted seismic waves. This typically causes an amplification of ground motion on convex features such as hills and ridges and a de-amplification on concave features such as valleys and canyons. Topographic effects are known to be frequency dependent, and the spectral accelerations can sometimes reach high values, causing significant damage to structures located on the feature. Topographically correlated damage patterns have been observed in several earthquakes, and topographic amplification has also been observed in several recorded ground motions. The phenomenon has also been extensively studied through numerical analyses. Even though different studies agree on the nature of topographic effects, quantifying these effects has been challenging, and the current literature has no consensus on how to predict topographic effects at a site. With population centers growing around regions of high seismicity and prominent topographic relief, such as California and Japan, quantitative estimation of these effects has become very important. In this dissertation, we address this shortcoming by developing empirical models that predict topographic effects at a site. These models are developed through an extensive empirical study of recorded ground motions from two large strong-motion datasets, namely the California small-to-medium-magnitude earthquake dataset and the global NGA-West2 dataset, and we propose topographic modification factors that quantify the expected amplification or de-amplification at a site. To develop these models, we required a parameterization of topography. We developed two types of topographic parameters at each recording station. 
The first type of parameter is computed from the elevation data around the station and comprises parameters such as smoothed slope, smoothed curvature, and relative elevation. The second type is derived from a series of simplified 2D numerical analyses, which compute an estimate of the expected 2D topographic amplification of a simple wave at a site in several different directions; these 2D amplifications are used to develop a family of parameters at each site. We study the trends in the ground-motion model residuals with respect to these topographic parameters to determine whether the parameters can capture topographic effects in the recorded data. We use statistical tests to determine if the trends are significant, and perform mixed-effects regression on the residuals to develop functional forms that can be used to predict topographic effects at a site. Finally, we compare the two types of parameters and their topographic predictive power. / Ph. D.
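The first family of parameters can be illustrated on a toy 1D elevation profile: smoothed slope is the gradient of a window-averaged profile, and relative elevation compares each point to the mean elevation of its neighborhood, so ridges come out positive and valleys negative. This is a sketch of the general idea only; the window sizes and the profile below are invented, and the dissertation's actual parameters are computed on 2D elevation grids.

```python
import numpy as np

# Toy 1D elevation profile (m); all numbers are invented for illustration.
elev = np.array([100., 102., 105., 110., 118., 125., 128., 126., 120., 112.])
dx = 30.0  # grid spacing (m), illustrative

def smoothed_slope(z, dx, window=3):
    """Gradient of a moving-average-smoothed profile."""
    kernel = np.ones(window) / window
    z_smooth = np.convolve(z, kernel, mode="same")
    return np.gradient(z_smooth, dx)

def relative_elevation(z, window=5):
    """Elevation of each point minus its local neighborhood mean;
    positive suggests a convex feature (ridge), negative a concave one."""
    half = window // 2
    out = np.empty_like(z)
    for i in range(len(z)):
        lo, hi = max(0, i - half), min(len(z), i + half + 1)
        out[i] = z[i] - z[lo:hi].mean()
    return out

rel = relative_elevation(elev)
print(int(rel.argmax()))  # index of the most ridge-like point on the profile
```

On this profile the maximum of relative elevation falls on the hilltop, which is where topographic amplification would be expected.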
103

LIDAR-BASED QUANTIFICATION OF INDIANA LAKE MICHIGAN SHORELINE CHANGES

Tasmiah Ahsan (12503458) 18 April 2024 (has links)
<p dir="ltr">Recent high-water levels in Lake Michigan caused extensive shoreline changes along the Indiana coastline. To evaluate recent shoreline changes of the Indiana coastline along Lake Michigan, topographic LiDAR surveys available for the years 2008, 2012, 2013, 2018, 2020, and 2022 were analyzed. This study included LiDAR data of over 400 cross-shore transects, generated at 100 m spacing. Beach profiles were generated to detect the shoreline position and quantify beach width and nearshore volume change. The analysis revealed accretion of both shoreline and beach width from 2008 to 2013 during a low water level period. The beach was rebuilt with a median increased value of 4 m. On the contrary, the shoreline eroded during increasing and high-water periods. Both shoreline and beach width receded with median values of 41 m and 32 m respectively during the period of water level increase from 2013 to 2020. Consequently, the beach profiles lost a median sand volume of 21.6 m<sup>3</sup>/m. Overall, the Indiana shoreline moved with a median of 18 m landward from 2008 to 2022. However, there was a large amount of spatial variability in the shoreline changes. The shoreline movement varied spatially between 63 m recession to 29 m accretion. Similarly, beach profiles showed a loss of median sand volume of 10 m<sup>3</sup>/m. The volume change ranged from 918 m<sup>3</sup>/m loss to 296 m<sup>3</sup>/m accumulation varying spatially along the shoreline. The largest sand loss was experienced at the downdrift of Michigan city harbor near Mt. Baldy. In addition to the spatial variation, the recession also varied slightly with shoreline type. The natural and hardened beaches were mostly recessional. The recession along the hardened shoreline was influenced by the timing of construction and its proximity to inland areas. Buffered beaches, characterized by a swath of vegetation or dunes, experienced the least erosion.</p>
104

A Comparison of SVM Classifiers with Embedded Feature Selection

Johansson, Adam, Mattsson, Anton January 2024 (has links)
Since their introduction in 1995, Support Vector Machines (SVMs) have become a widely employed machine learning model for binary classification, owing to their explainable architecture, efficient forward inference, and good ability to generalize. A common desire, not only for SVMs but for machine learning classifiers in general, is to have the model perform feature selection, using only a limited subset of the available attributes in its predictions. Various alterations to the SVM problem formulation exist that address this, and in this report we compare a range of such SVM models. We compare accuracy and feature selection between the models on different datasets, both real and synthetic, and we also investigate the impact of dataset size on these quantities. Our conclusions are that models trained to classify samples based on a smaller subset of features tend to perform at a level comparable to dense models, with a particular advantage when the dataset is small. Furthermore, as the training dataset grows in size, the number of selected features also increases, giving a more complex classifier when a larger data supply is available.
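One standard way to get embedded feature selection in an SVM, of the kind this report compares, is to add an L1 penalty to the hinge loss so that weights on uninformative features are driven toward zero. The sketch below trains such a model by plain subgradient descent on synthetic data where only the first of five features matters; it is a toy illustration of the mechanism, not any of the report's specific formulations, and all hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: of five features, only the first is informative;
# labels are (noisy) signs of feature 0.
n = 200
X = rng.normal(size=(n, 5))
y = np.where(X[:, 0] + 0.1 * rng.normal(size=n) > 0, 1.0, -1.0)

def train_l1_svm(X, y, lam=0.05, lr=0.01, epochs=300):
    """Subgradient descent on the L1-regularized hinge loss:
    mean(max(0, 1 - y (Xw + b))) + lam * ||w||_1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # samples violating the margin
        grad_w = -(X[active] * y[active, None]).sum(axis=0) / len(y) \
                 + lam * np.sign(w)
        grad_b = -y[active].sum() / len(y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_l1_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
# The L1 penalty keeps the noise-feature weights small, so the single
# informative feature carries by far the largest weight.
print(np.abs(w).round(2), round(float(accuracy), 2))
```

Sweeping `lam` trades sparsity against accuracy, which is essentially the comparison the report carries out across model variants and dataset sizes.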
105

Počítání vozidel v statickém obraze / Counting Vehicles in Static Images

Zemánek, Ondřej January 2020 (has links)
This thesis focuses on the problem of counting vehicles in static images without knowledge of the geometric properties of the scene. As part of the solution, five convolutional neural network architectures were implemented and trained. A large dataset was also acquired, containing 19,310 images captured from 12 viewpoints and covering 7 different scenes. The convolutional networks map an input sample to a vehicle density map, from which the vehicle count and localization within the input image can be obtained. The main contribution of this work is the comparison and application of state-of-the-art approaches to counting objects in images. Most of these architectures were designed for counting people in images, so they had to be adapted for counting vehicles in static images. The trained models are evaluated with the GAME metric on the TRANCOS dataset and on the large combined dataset. The results of all models are then described and compared.
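The density-map formulation used by these networks can be illustrated without any learning: each annotated vehicle point is replaced by a small normalized Gaussian, and summing (integrating) the map recovers the object count. The sketch below builds such a ground-truth density map; kernel size, image size, and the annotated points are invented for illustration.

```python
import numpy as np

def density_map(points, shape, sigma=2.0, radius=6):
    """Place a normalized Gaussian at each annotated point; the map's
    sum equals the object count (up to truncation at image borders)."""
    h, w = shape
    dmap = np.zeros(shape)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()  # each blob integrates to 1
    for (r, c) in points:
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        kr0, kc0 = r0 - (r - radius), c0 - (c - radius)
        dmap[r0:r1, c0:c1] += kernel[kr0:kr0 + (r1 - r0),
                                     kc0:kc0 + (c1 - c0)]
    return dmap

# Three "vehicles" annotated as points, well inside the image.
pts = [(20, 30), (40, 60), (50, 10)]
dmap = density_map(pts, (64, 80))
print(round(float(dmap.sum()), 3))  # ≈ 3.0, the vehicle count
```

A counting network is trained to regress such maps from images; at inference, summing the predicted map gives the count and its peaks give localization.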
106

Detection of facade cracks using deep learning

Eriksson, Linus January 2020 (has links)
Facade cracks are a common problem in the north of Sweden due to shifting temperatures creating frost in the facades, which ultimately damages them, often in the form of cracks. To fix these cracks, workers must visually inspect the facades to find them, which is a difficult and time-consuming task. This project explores the possibilities of creating an algorithm that can classify cracks on facades with the help of deep learning models. The idea is that, in the future, such an algorithm could be deployed on a drone that hovers around buildings, filming the facade and reporting back any damage. The work in this project is exploratory: the path of convolutional neural networks has been explored, as well as the possibility of simulating training data due to the lack of real-world data. The experimental work in this project led to some interesting conclusions for further work. The relatively small amount of data used in this project points towards the possibility of using simulated data as a complement to real data, as well as the possibility of using convolutional neural networks as a means of classifying facades for crack recognition. The data and conclusions collected in this report can be used as preparatory work for a working prototype algorithm.
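The idea of simulating training data can be sketched very simply: draw a dark, jagged polyline (the crack) over a noisy facade-like texture and label the result. The sketch below is one invented way to do this; the texture model, crack width, and darkening factor are arbitrary choices, not the project's actual simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def facade_patch(size=64):
    """Noisy grey texture standing in for an intact facade."""
    return np.clip(rng.normal(0.7, 0.05, (size, size)), 0.0, 1.0)

def add_crack(img):
    """Draw a dark random-walk polyline across the patch to simulate a crack."""
    out = img.copy()
    size = img.shape[0]
    r = rng.integers(0, size)
    for c in range(size):
        r = int(np.clip(r + rng.integers(-1, 2), 1, size - 2))
        out[r - 1:r + 2, c] *= 0.3  # darken a 3-pixel-wide streak
    return out

# Build a tiny labelled dataset: label 1 = cracked, 0 = intact.
images, labels = [], []
for _ in range(10):
    patch = facade_patch()
    if rng.random() < 0.5:
        images.append(add_crack(patch))
        labels.append(1)
    else:
        images.append(patch)
        labels.append(0)

print(len(images), sum(labels))
```

Such synthetic pairs can supplement scarce real photographs when training a convolutional classifier, which is the complementary-data strategy the project explores.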
107

Enhancing Long-Term Human Motion Forecasting using Quantization-based Modelling. : Integrating Attention and Correlation for 3D Motion Prediction / Förbättring av långsiktig prognostisering av mänsklig rörelse genom kvantisering-baserad modellering. : Integrering av uppmärksamhet och korrelation för 3D-rörelseförutsägelse.

González Gudiño, Luis January 2023 (has links)
This thesis focuses on addressing the limitations of existing human motion prediction models by extending the prediction horizon to very long-term forecasts. The objective is to develop a model that achieves one of the best stable prediction horizons in the field, providing accurate predictions without significant error increase over time. Through the use of quantization-based models, our research achieves this objective under the proposed aligned version of Mean Per Joint Position Error. The first of the two proposed models, an attention-based Vector Quantized Variational AutoEncoder (VQ-VAE), demonstrates good performance in predicting beyond conventional time boundaries, maintaining low error rates as the prediction horizon extends. While slight discrepancies in joint positions are observed, the model effectively captures the underlying patterns and dynamics of human motion, which remains highly applicable in real-world scenarios. Furthermore, our investigation into a correlation-based VQ-VAE, as an alternative to the attention-based one, highlights the challenges of capturing complex relationships and meaningful patterns within the data. The correlation-based VQ-VAE's tendency to predict flat outputs emphasizes the need for further exploration and innovative approaches to improve its performance. Overall, this thesis contributes to the field of human motion prediction by extending the prediction horizon and providing insights into model performance and limitations. The developed model introduces a novel option for long-term prediction applications across various domains and sets the foundation for future research to enhance performance in long-term scenarios.
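The core quantization step of a VQ-VAE, snapping each encoder output to its nearest codebook vector, can be sketched independently of the full model. The codebook size and dimensions below are arbitrary, and the codebook is random rather than learned; this only illustrates the nearest-neighbour lookup.

```python
import numpy as np

rng = np.random.default_rng(7)

def quantize(z, codebook):
    """Replace each latent vector with its nearest codebook entry.

    z:        (n, d) encoder outputs
    codebook: (k, d) embedding vectors
    returns:  quantized latents (n, d) and the chosen indices (n,)
    """
    # Squared distances between every latent and every codebook entry.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

codebook = rng.normal(size=(8, 4))  # k=8 codes of dimension 4
z = rng.normal(size=(16, 4))        # a batch of latents
zq, idx = quantize(z, codebook)
print(zq.shape, idx.shape)          # (16, 4) (16,)
```

In a full VQ-VAE the codebook is learned and a straight-through estimator copies gradients from the quantized latents back to the encoder; long-term motion forecasting then amounts to modelling sequences of these discrete indices.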
108

Image-classification for Brain Tumor using Pre-trained Convolutional Neural Network : Bildklassificering för hjärntumör med hjälp av förtränat konvolutionellt neuralt nätverk

Osman, Ahmad, Alsabbagh, Bushra January 2023 (has links)
Brain tumor is a disease characterized by uncontrolled growth of abnormal cells in the brain. The brain is responsible for regulating the functions of all other organs; hence, any atypical growth of cells in the brain can have severe implications for those functions. Global mortality from brain cancer in 2020 was estimated at 251,329. Early detection of brain cancer is therefore critical for prompt treatment and for improving patients' quality of life as well as survival rates. Manual medical image classification for diagnosing diseases has been shown to be extremely time-consuming and labor-intensive. Convolutional Neural Networks (CNNs) have proven to be a leading algorithm in image classification, outperforming humans. This paper compares five CNN architectures, namely VGG-16, VGG-19, AlexNet, EfficientNetB7, and ResNet-50, in terms of performance and accuracy using transfer learning. In addition, the authors discuss the economic impact of CNNs, as an AI approach, on the healthcare sector. The models' performance is demonstrated using loss and accuracy curves as well as the confusion matrix. The conducted experiment resulted in VGG-19 achieving the best performance with 97% accuracy, while EfficientNetB7 achieved the worst performance with 93% accuracy.
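The evaluation described above, accuracy plus a confusion matrix, takes only a few lines to compute once predictions are available. The labels below are invented stand-ins for tumor / no-tumor predictions, not the paper's results.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical binary labels: 1 = tumor, 0 = no tumor.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 0])

cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()  # correct predictions lie on the diagonal
print(cm)
print(f"accuracy = {accuracy:.0%}")  # 80%
```

The off-diagonal cells separate false negatives (missed tumors, `cm[1, 0]`) from false positives (`cm[0, 1]`), which matters clinically far more than overall accuracy alone.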
109

Multimodální zpracování dat a mapování v robotice založené na strojovém učení / Machine Learning-Based Multimodal Data Processing and Mapping in Robotics

Ligocki, Adam January 2021 (has links)
This dissertation deals with the application of neural networks for object detection on multimodal data in robotics. It targets three areas in total: dataset creation, multimodal data processing, and neural network training. The most important part of the work is the design of a method for building large annotated datasets without time-consuming human intervention. The method uses neural networks trained on RGB images and, using data from several sensors to build a model of the surroundings, maps annotations from RGB images onto other data domains such as thermal images or point clouds. With this method the author created a dataset of several hundred thousand annotated images and used it to train a neural network that subsequently outperformed models trained on smaller, human-annotated datasets. The author further studies the robustness of object detection across several data domains under various weather conditions. The thesis also describes the complete multimodal data processing pipeline that the author created during his doctoral studies. This includes the development of a unique sensory rig equipped with a range of sensors commonly used in robotics. The author also describes the process of creating the large, publicly available Brno Urban Dataset. Finally, the author describes the software created during his studies and how it is used for data processing in this work (Atlas Fusion and the Robotic Template Library).
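The annotation-transfer idea, taking a point labelled in one sensor's frame and finding where it lands in another sensor's image, reduces to a rigid transform followed by a pinhole projection. The sketch below shows that geometry only; the intrinsics and extrinsics are invented and are not those of the actual sensor rig.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D points (n, 3), given in the source sensor frame, into
    pixel coordinates of a target camera with intrinsics K and
    extrinsics (R, t) mapping source frame -> target camera frame."""
    cam = points_3d @ R.T + t      # rigid transform into the target frame
    uv = cam @ K.T                 # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

# Invented calibration: identity rotation, 10 cm baseline, simple intrinsics.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

# A labelled point 5 m in front of the source sensor, on its optical axis.
label_xyz = np.array([[0.0, 0.0, 5.0]])
print(project(label_xyz, K, R, t))  # lands near the image centre
```

Repeating this for every annotated point (or box corner) is what lets RGB-trained detections become labels in the thermal or point-cloud domain.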
110

Towards Realistic Datasets for Classification of VPN Traffic : The Effects of Background Noise on Website Fingerprinting Attacks / Mot realistiska dataset för klassificering av VPN-trafik : Effekten av bakgrundsljud på website fingerprinting-attacker

Sandquist, Christoffer, Ersson, Jon-Erik January 2023 (has links)
Virtual Private Networks (VPNs) are a booming business with significant margins once a solid user base has been established, and big VPN providers put considerable amounts of money into marketing. However, there exist Website Fingerprinting (WF) attacks that can correctly predict which website a user is visiting based on web traffic, even when it passes through a VPN tunnel. These attacks are fairly accurate in closed-world scenarios, but such scenarios are still far from capturing typical user behaviour. In this thesis, we explore and build tools that can collect VPN traffic from different sources. This traffic can then be combined into more realistic datasets on which we evaluate the accuracy of WF attacks. We hope that these datasets will help us and others better simulate realistic scenarios. Over the course of the project we developed automation scripts and data processing tools using Bash and Python. Traffic was collected on a server provided by our university using a combination of containerisation, the scripts we developed, Unix tools, and Wireshark. After some manual data cleaning, we combined our captured traffic with a provided dataset of web traffic and created a new dataset that we used to evaluate the accuracy of three WF attacks. 
By the end we had collected 1345 capture files of VPN traffic, all from the popular livestreaming website twitch.tv. Livestreaming channels were picked from the twitch.tv front page, and we ended up with 245 unique channels in our dataset. Using our dataset we managed to decrease the accuracy of all three tested WF attacks from 90% down to 47% with a WF attack confidence threshold of 0.0, and from 74% down to 17% with a confidence threshold of 0.99. Even though this is a significant decrease in accuracy, it comes with a roughly tenfold increase in the number of captured packets for the WF attacker. In the end we collected traffic only from twitch.tv, but we still obtained some interesting results and would welcome continued research in this area. Thesis artifacts are available at github.com/C-Sand/rds-collect.
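The confidence-threshold evaluation above can be made concrete: the attack only emits a guess when its top-class confidence exceeds the threshold, and accuracy is measured over the emitted guesses. The toy predictions below are invented purely to illustrate the bookkeeping.

```python
# Toy evaluation of a classifier under a confidence threshold, in the
# style of reporting WF attack accuracy at thresholds 0.0 and 0.99.
# Predictions and labels below are invented for illustration.

predictions = [  # (predicted site, confidence, true site)
    ("siteA", 0.995, "siteA"),
    ("siteB", 0.70,  "siteB"),
    ("siteA", 0.60,  "siteC"),  # wrong, low confidence
    ("siteC", 0.999, "siteC"),
    ("siteB", 0.95,  "siteA"),  # wrong, medium confidence
]

def accuracy_at(threshold, preds):
    """Accuracy over the guesses whose confidence exceeds the threshold."""
    emitted = [(p, y) for p, conf, y in preds if conf > threshold]
    if not emitted:
        return None  # the attack made no guesses at this threshold
    return sum(p == y for p, y in emitted) / len(emitted)

print(accuracy_at(0.0, predictions))   # 3/5 = 0.6
print(accuracy_at(0.99, predictions))  # 2/2 = 1.0
```

Raising the threshold trades coverage for precision, which is why the thesis reports both the 0.0 and 0.99 operating points for each attack.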
