111

Object detection for autonomous trash and litter collection / Objektdetektering för autonom skräpupplockning

Edström, Simon January 2022
Trash and litter discarded on the street is a large environmental issue in Sweden and across the globe. In Swedish cities alone it is estimated that 1.8 billion articles of trash are thrown onto the street each year, constituting around 3 kilotons of waste. One avenue to combat this societal and environmental problem is to use robotics and AI. A robot could learn to detect trash in the wild and collect it in order to clean the environment. A key component of such a robot would be its computer vision system, which allows it to detect litter and trash. Such systems are not trivially designed or implemented and have only recently reached high enough performance to work in industrial contexts. This master thesis focuses on creating and analysing such an algorithm by gathering data for use in a machine learning model, developing an object detection pipeline, and evaluating the performance of that pipeline based on varying its components. Specifically, methods using hyperparameter optimisation, pseudolabeling, and the preprocessing methods tiling and illumination normalisation were implemented and analysed. This thesis shows that it is possible to create an object detection algorithm with high performance using currently available state-of-the-art methods. Within the analysed context, hyperparameter optimisation did not significantly improve performance, and pseudolabeling could only briefly be analysed but showed promising results. Tiling greatly increased mean average precision (mAP) for the detection of small objects, such as cigarette butts, but decreased the mAP for large objects, and illumination normalisation improved mAP for images that were brightly lit. Both preprocessing methods reduced the frames per second that a full detector could run at, whilst pseudolabeling and hyperparameter optimisation greatly increased training times.
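As a rough illustration of the tiling preprocessing discussed above (not the thesis author's implementation), the sketch below splits an image into overlapping tiles so that small objects such as cigarette butts occupy more pixels relative to the detector's input resolution. The tile size and overlap are assumed values.

```python
# Minimal sketch of tiling preprocessing for small-object detection.
# Tile size and overlap are illustrative assumptions, not values from the thesis.
import numpy as np

def tile_image(image: np.ndarray, tile: int = 640, overlap: int = 64):
    """Split an H x W x C image into overlapping square tiles.

    Returns a list of (tile_array, (y0, x0)) pairs; the offsets let
    per-tile detections be mapped back to full-image coordinates.
    """
    h, w = image.shape[:2]
    stride = tile - overlap
    tiles = []
    for y0 in range(0, max(h - overlap, 1), stride):
        for x0 in range(0, max(w - overlap, 1), stride):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            tiles.append((image[y0:y1, x0:x1], (y0, x0)))
    return tiles

# Example: a 1080p frame becomes a grid of overlapping 640 px tiles.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(len(tile_image(frame)))  # number of tiles fed to the detector
```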
112

Image-classification for Brain Tumor using Pre-trained Convolutional Neural Network / Bildklassificering för hjärntumör med hjälp av förtränat konvolutionellt neuralt nätverk

Alsabbagh, Bushra January 2023
Brain tumor is a disease characterized by uncontrolled growth of abnormal cells in the brain. The brain regulates the functions of all other organs, hence any atypical growth of cells in the brain can have severe implications for those functions. Global mortality from brain cancer in 2020 was estimated at 251,329 deaths. Early detection of brain cancer is therefore critical for prompt treatment and for improving patients' quality of life and survival rates. Manual medical image classification for diagnosing diseases has been shown to be extremely time-consuming and labor-intensive. Convolutional Neural Networks (CNNs) have proven to be a leading algorithm in image classification, outperforming humans. This paper compares five CNN architectures, namely VGG-16, VGG-19, AlexNet, EfficientNetB7, and ResNet-50, in terms of performance and accuracy using transfer learning. In addition, the authors discuss the economic impact of CNNs, as an AI approach, on the healthcare sector. The models' performance is demonstrated using loss and accuracy curves as well as the confusion matrix. The conducted experiment resulted in VGG-19 achieving the best performance with 97% accuracy, while EfficientNetB7 achieved the worst performance with 93% accuracy.
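A minimal transfer-learning sketch in the spirit of the comparison above, assuming PyTorch/torchvision, an ImageNet-pretrained VGG-19, and a hypothetical two-class (tumour vs. no tumour) setup; the dataset, class count and hyperparameters are assumptions, not details taken from the thesis.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained VGG-19 for brain-MRI
# classification. Class count and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

num_classes = 2                        # e.g. tumour vs. no tumour (assumed)
model = vgg19(weights=VGG19_Weights.DEFAULT)

for p in model.features.parameters():  # freeze the convolutional feature extractor
    p.requires_grad = False

# Replace the final ImageNet classifier layer (4096 -> 1000) with a new head.
model.classifier[6] = nn.Linear(4096, num_classes)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```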
113

Vitiligo image classification using pre-trained Convolutional Neural Network Architectures, and its economic impact on health care / Vitiligo bildklassificering med hjälp av förtränade konvolutionella neurala nätverksarkitekturer och dess ekonomiska inverkan på sjukvården

Bashar, Nour, Alsaid Suliman, MRami January 2022
Vitiligo is a skin disease in which the pigment cells that produce melanin die or stop functioning, causing white patches to appear on the body. Although vitiligo is not considered a serious disease, it can indicate that something is wrong with a person's immune system. In recent years, the use of medical image processing techniques has grown, and research continues to develop new techniques for analysing and processing medical images. Deep convolutional neural networks have proven effective in many medical image classification tasks, which suggests they may also perform well in vitiligo classification. Our study uses four deep convolutional neural networks to classify images of vitiligo and normal skin. The architectures selected are VGG-19, ResNeXt101, InceptionResNetV2 and InceptionV3. ROC and AUC metrics are used to assess each model's performance. In addition, the authors investigate the economic benefits that this technology may provide to the healthcare system and patients. To train and evaluate the CNN models, the authors used a dataset that contains 1341 images in total. Because the dataset is limited, 5-fold cross-validation is also employed to improve the models' predictions. The results demonstrate that InceptionV3 achieves the best performance in the classification of vitiligo, with an AUC value of 0.9111, while InceptionResNetV2 has the lowest AUC value of 0.8560.
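The evaluation protocol described (5-fold cross-validation scored with ROC AUC) can be sketched with scikit-learn as below. A logistic-regression classifier on synthetic features stands in for the CNN pipeline, which is not reproduced here; only the dataset size matches the abstract.

```python
# Hedged sketch of 5-fold cross-validated AUC evaluation. The classifier and
# synthetic features stand in for the CNN pipeline described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1341, 64))        # 1341 samples, matching the dataset size
y = rng.integers(0, 2, size=1341)      # binary labels: vitiligo vs. normal skin

aucs = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"mean AUC over 5 folds: {np.mean(aucs):.4f}")
```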
114

Exploring Alignment Methods in an Audio Matching Scenario for a Music Practice Tool : A Study of User Demands, Technical Aspects, and a Web-Based Implementation / Utforskning av metoder för delsekvensjustering i ett ljudmatchnings scenario för ett musikövningssverktyg : En studie av användarkrav, tekniska aspekter och en webbaserad implementation

Ferm, Oliwer January 2024
This work implements a prototype of a music practice tool and evaluates alignment methods in the audio matching scenario it requires. Through two interviews with piano teachers, we investigated the user demands on a music performance practice tool that incorporates an alignment technique between a shorter practice segment and a reference performance, from a jazz and classical music point of view. Regarding technical aspects, we studied how Deep Learning (DL) based signal representations compare to standard manually tailored features in the alignment task. Experiments were conducted using a well-known alignment algorithm on a piano dataset. The dataset had manually annotated beat positions, which were used for evaluation. We found the traditional features to be superior to the DL-based signal representations when used independently. We also found that the DL-based signal representations, on their own, were insufficient for our test cases. However, the DL representations contained valuable information: multiple test cases demonstrated that the combination of DL representations and traditional representations outperformed all other considered approaches. We also ran experiments using deadpan MIDI renditions as references instead of actual performances, which gave a slight but insignificant improvement in alignment performance. Finally, the prototype was implemented as a website, using a traditional signal representation as input to the alignment algorithm.
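The underlying matching problem can be illustrated with a small subsequence dynamic-time-warping routine over feature matrices (for example chroma vectors): the short practice segment is allowed to start anywhere in the reference. This numpy sketch is a generic illustration, not the specific algorithm or features used in the thesis.

```python
# Generic subsequence-DTW sketch: align a short query feature sequence against
# a longer reference, allowing the match to start anywhere in the reference.
import numpy as np

def subsequence_dtw(query: np.ndarray, ref: np.ndarray):
    """query: (n, d) features of the practice segment; ref: (m, d) reference.
    Returns (best_cost, end_index) of the cheapest alignment ending in ref."""
    n, m = len(query), len(ref)
    cost = np.linalg.norm(query[:, None, :] - ref[None, :, :], axis=2)
    D = np.full((n, m), np.inf)
    D[0] = cost[0]                      # subsequence: free start anywhere in ref
    for i in range(1, n):
        for j in range(1, m):
            D[i, j] = cost[i, j] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[-1]))
    return float(D[-1, end]), end

rng = np.random.default_rng(1)
reference = rng.normal(size=(200, 12))       # e.g. 12-dimensional chroma frames
query = reference[80:120] + 0.05 * rng.normal(size=(40, 12))
print(subsequence_dtw(query, reference))     # end index should land near 119
```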
115

Massive galaxies at high redshift

Pearce, Henry James January 2012
A unique K-band selected high-redshift spectroscopic dataset (UDSz) is exploited to gain further understanding of galaxy evolution at z > 1. The data were acquired as part of an ESO Large Programme, and this thesis presents the reduction and analysis of a sample of ∼450 deep optical spectra of a random 1-in-6 sample of the K_AB < 23, z > 1 galaxy population. Based on the final reduced dataset, spectrophotometric modelling of the optical spectra and multi-wavelength photometry available for each galaxy is performed using a combination of single and dual component stellar population models. The stellar-mass and age estimates provided by the spectrophotometric modelling are exploited throughout the rest of the thesis to investigate the evolution of massive galaxies at z > 1. Focusing on a K-band bright (K < 21.5) sub-sample in the redshift range 1.3 < z < 1.5, the galaxy size-mass relation has been studied in detail. In agreement with some previous studies, it is found that massive, old, early-type galaxies (ETGs) have characteristic radii a factor of ∼1.5-3.0 smaller than their local counterparts at a given stellar mass. Due to the potential errors in spectrophotometric estimates of stellar masses at high redshift, velocity dispersion measurements are derived for a sub-sample of massive ETGs at z > 1.3 in order to calculate dynamical mass estimates. To date, only a handful of objects at z > 1.3 have individual velocity dispersion estimates in the literature. Here the largest single sample (13 objects) of velocity dispersion measurements at high redshift is presented. The results for the sub-sample of objects with dynamical mass estimates confirm the results based on stellar mass estimates that high-redshift massive systems are more compact than their local counterparts. The fraction of K-band bright objects at high redshift that are passively evolving is calculated with specific star-formation rates from the UV rest-frame continuum, [OII] emission and 24 μm data. It is concluded that ∼58 ± 10% of the K < 21.5, 1.3 < z < 1.5 galaxy population is passively evolving. Various photometric techniques for separating star-forming and passively evolving galaxies are assessed by exploiting the accurate spectral types derived for the UDSz spectroscopic sample. Popular high-redshift selection techniques are shown to fail to effectively select complete samples of passive objects with low levels of contamination. Using detailed information available for the UDSz dataset, various techniques are optimised and then used to estimate the passive fraction from the full UDS photometric catalog. The passive fraction results from the full photometric catalog are found to agree well with the results derived from the UDSz sample. With the Visible and Infrared Survey Telescope for Astronomy (VISTA) now starting to produce data, the opportunity has been taken to develop high-redshift galaxy population dividers based on the VISTA filters. Using the first data release from the VISTA Deep Extragalactic Observations (VIDEO) survey (VVDS D1 field), the passive fractions of K-band limited samples have been estimated to compare with results derived in the UDS. Within the errors, the passive fraction estimates in the UDS and VISTA VVDS D1 field are found to agree reasonably well. Finally, composite spectra are used to study the evolution of various galaxy sub-samples as a function of redshift, age, stellar mass and specific star-formation rate. 
This work produces a remarkably clean result, showing that the massive, absolute K-band bright, passively evolving ETGs are always the oldest population, with ages close to the age of the Universe at z ∼ 1.4. In contrast, the late-type, low-mass, star-forming galaxies are always found to be much younger systems. This result strongly supports the downsizing scenario, in which more massive systems complete their stellar-mass assembly before their lower-mass counterparts.
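For context on how velocity dispersions translate into dynamical masses, a common virial-style estimator is M_dyn ≈ β σ² R_e / G with β of order 5. The short astropy sketch below evaluates it for illustrative numbers; the coefficient and the example σ and R_e are assumptions, not values taken from the thesis.

```python
# Illustrative virial-style dynamical mass estimate, M_dyn = beta * sigma^2 * R_e / G.
# beta ~ 5 and the example sigma, R_e are assumed values, not results from the thesis.
from astropy import units as u
from astropy.constants import G

beta = 5.0
sigma = 250 * u.km / u.s        # stellar velocity dispersion (assumed)
r_e = 2.0 * u.kpc               # effective (half-light) radius (assumed)

m_dyn = (beta * sigma**2 * r_e / G).to(u.solMass)
print(f"M_dyn ≈ {m_dyn.value:.2e} {m_dyn.unit}")   # of order 1e11 solar masses
```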
116

Navigating campus: a geospatial approach to 3-D routing

Jenkins, Jacob Luke January 1900
Master of Landscape Architecture / Department of Landscape Architecture/Regional and Community Planning / Howard Hahn / Evolving needs of universities, municipalities, and corporations demand more sustainable and efficient techniques for data management. Geographic Information Systems (GIS) enable decision makers to spatially analyze the built environment to better understand facility usage by running test scenarios to evaluate current efficiencies and identify opportunities for investment. This can only be conducted when data is organized and leveraged across many departments in a collaborative environment. Data organization through GIS encourages interdepartmental collaboration, uniting all efforts on a common front. An organized system facilitates a working relationship between the university and the community of Manhattan, increasing efficiency, developing sustainable practices, and enhancing the health and safety of Kansas State University and the larger community. Efficiency is increased through automation of many current practices such as work requests and routine maintenance. Sustainable practices will be developed by generating self-guided campus tours and identifying areas appropriate for bioswales. Lastly, safety will be enhanced throughout campus by increasing emergency response access, determining areas within buildings that are difficult to reach in emergency situations, and identifying unsafe areas on campus. Optimizing data management for Kansas State University was conducted in three phases. First, a baseline assessment of facility management at Kansas State University was conducted through discussions with campus departments. Second, case study interviews and research were conducted with leaders in GIS management. Third, practices for geospatial data management were adapted and implemented for Kansas State University: building a centralized database, constructing a 3-dimensional routing network, and modeling a virtual campus in 3D.
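A 3-D routing network of the kind described can be sketched as a weighted graph whose nodes carry x, y, z coordinates and whose edge weights are 3-D distances. The networkx example below is a generic illustration with made-up campus nodes, not the GIS workflow actually built in the report.

```python
# Generic 3-D routing sketch: nodes are (x, y, z) points (e.g. building entrances,
# stair landings); edge weights are 3-D Euclidean distances. Coordinates are made up.
import math
import networkx as nx

nodes = {
    "entrance": (0.0, 0.0, 0.0),
    "stair_1F": (30.0, 5.0, 0.0),
    "stair_2F": (30.0, 5.0, 4.0),     # 4 m higher: second floor
    "room_201": (55.0, 12.0, 4.0),
}
edges = [("entrance", "stair_1F"), ("stair_1F", "stair_2F"), ("stair_2F", "room_201")]

G = nx.Graph()
for a, b in edges:
    G.add_edge(a, b, weight=math.dist(nodes[a], nodes[b]))

path = nx.shortest_path(G, "entrance", "room_201", weight="weight")
length = nx.shortest_path_length(G, "entrance", "room_201", weight="weight")
print(path, f"{length:.1f} m")
```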
117

Context-aware worker selection for efficient quality control in crowdsourcing / Sélection des travailleurs attentifs au contexte pour un contrôle efficace de la qualité en externalisation à grande échelle

Awwad, Tarek 13 December 2018
Crowdsourcing has proved its ability to address large-scale data collection tasks at a low cost and in a short time. However, due to the dependence on unknown workers, the quality of the crowdsourcing process is questionable and must be controlled. Indeed, maintaining the efficiency of crowdsourcing requires the time and cost overhead related to this quality control to stay low. Current quality control techniques suffer from high time and budget overheads and from their dependency on prior knowledge about individual workers. 
In this thesis, we address these limitations by proposing the CAWS (Context-Aware Worker Selection) method, which operates in two phases: in an offline phase, the correlations between the worker declarative profiles and the task types are learned. Then, in an online phase, the learned profile models are used to select the most reliable online workers for the incoming tasks depending on their types. Using declarative profiles helps eliminate any probing process, which reduces the time and the budget while maintaining the crowdsourcing quality. In order to evaluate CAWS, we introduce an information-rich dataset called CrowdED (Crowdsourcing Evaluation Dataset). The generation of CrowdED relies on a constrained sampling approach that allows the production of a dataset which respects the requester budget and type constraints. Through its generality and richness, CrowdED also helps plug the benchmarking gap present in the crowdsourcing community. Using CrowdED, we evaluate the performance of CAWS in terms of quality, time and budget gain. Results show that automatic grouping is able to achieve a learning quality similar to job-based grouping, and that CAWS is able to outperform the state-of-the-art profile-based worker selection when it comes to quality, especially when strong budget and time constraints exist. Finally, we propose CREX (CReate Enrich eXtend), which provides the tools to select and sample input tasks and to automatically generate custom crowdsourcing campaign sites in order to extend and enrich CrowdED.
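To make the two-phase idea concrete, here is a heavily simplified sketch (not the CAWS implementation): historical tasks are vectorised and clustered offline, and at runtime a new task is routed to the workers whose declarative profile vector is most similar to the profile associated with its nearest cluster. The task texts, worker profile vectors and the per-cluster profile rule are all invented for illustration.

```python
# Simplified two-phase sketch inspired by the offline/online split described above.
# Task texts, worker profile vectors and the profile-per-cluster rule are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# --- offline phase: cluster historical tasks ---
history = ["label cats in photos", "transcribe a French receipt",
           "label dogs in photos", "translate a French sentence"]
vec = TfidfVectorizer()
X = vec.fit_transform(history)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Assume each cluster has a learned "ideal worker profile" vector (here: random).
rng = np.random.default_rng(0)
cluster_profiles = rng.random((2, 3))          # 3 declarative profile dimensions

# --- online phase: route a new task and rank available workers ---
new_task = vec.transform(["label birds in photos"])
cluster = int(km.predict(new_task)[0])

workers = {"w1": [0.9, 0.1, 0.3], "w2": [0.2, 0.8, 0.5], "w3": [0.7, 0.4, 0.2]}
scores = {w: float(cosine_similarity([p], [cluster_profiles[cluster]])[0, 0])
          for w, p in workers.items()}
print(sorted(scores, key=scores.get, reverse=True))   # best-matching workers first
```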
118

Archaeomagnetic field intensity evolution during the last two millennia / Evolução da intensidade do campo arqueomagnético durante os últimos dois milênios

Silva, Wilbor Poletti 14 September 2018
Temporal variations of Earth's magnetic field provide a great range of geophysical information about the dynamics at different layers of the Earth. Since it is a planetary field, regional and global aspects can be explored, depending on the timescale of the variations. In this thesis, the geomagnetic field variations over the last two millennia were investigated. For that, improvements were made to the methods used to recover the ancient magnetic field intensity from archaeological material, new data were acquired, and a critical assessment of the global archaeomagnetic database was performed. Two methodological advances are reported: (i) a correction, for the microwave method, of the cooling-rate effect, which is associated with the difference between the cooling time during the manufacture of the material and that of the heating steps during the archaeointensity experiment; (ii) a test for thermoremanent anisotropy correction based on the arithmetic mean of six orthogonal samples. The temporal variation of the magnetic intensity for South America was investigated from nine new data points, three from ruins of the Guaraní Jesuit Missions and six from archaeological sites associated with jerky beef farms, both located in Rio Grande do Sul, Brazil, with ages covering the last 400 years. These data, combined with the regional archaeointensity database, demonstrate that the influence of significant non-dipole components in South America started at ~1800 CE. Finally, from a reassessment of the global archaeointensity database, a new interpretation was proposed for the evolution of the geomagnetic axial dipole, in which this component has been falling steadily since ~700 CE, associated with the breaking of the symmetry of the advective sources operating in the outer core.
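As a hedged arithmetic sketch of a cooling-rate correction of the general kind described above: under the common convention that a specimen cooled slowly (as in antiquity) acquires more thermoremanence than one cooled quickly in the laboratory, a correction factor measured on the specimen rescales the raw intensity estimate. The convention and numbers below are assumptions, not values from the thesis.

```python
# Hedged sketch of a cooling-rate correction. Convention and numbers are assumed:
# if a specimen acquires more TRM when cooled slowly (as in antiquity) than in the
# fast laboratory cooling, the raw intensity estimate is scaled down accordingly.
trm_slow = 1.06      # TRM acquired in a slow-cooling check experiment (arbitrary units)
trm_fast = 1.00      # TRM acquired with the fast laboratory cooling
f_cr = trm_slow / trm_fast          # cooling-rate factor, here a 6 % effect

raw_intensity_uT = 42.0             # uncorrected archaeointensity estimate (invented)
corrected = raw_intensity_uT / f_cr
print(f"corrected intensity ≈ {corrected:.1f} µT")   # ≈ 39.6 µT
```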
120

Deep spiking neural networks

Liu, Qian January 2018
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem that emerges is how to operate and train biologically-plausible SNNs so as to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called ‘off-line’ training, since it does not take place on an SNN directly, but rather on an ANN instead. However, previous work on such off-line training methods has struggled in terms of poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate in the SNN. Based on this generalised training method and its fine tuning, we achieve state-of-the-art accuracy for spiking neurons on the MNIST classification task, 99.07%, using a deep spiking convolutional neural network (ConvNet). We then take a step forward to ‘on-line’ training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates to the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Time-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains, thereby addressing the performance drop observed in on-line SNN training. The promising results of spiking Autoencoders (AEs) and spiking Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
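The idea of modelling a spiking neuron's response rate with a smooth, noise-dependent activation can be illustrated with a scaled softplus of the form y = k·σ·ln(1 + exp(x/(k·σ))). This parameterisation is my reading of the description above, not code from the thesis, and the values of k and σ below are purely illustrative.

```python
# Hedged sketch of a Noisy-Softplus-style activation: a softplus whose sharpness
# and scale depend on the noise level sigma. The exact parameterisation used in
# the thesis may differ; k and sigma values here are illustrative.
import numpy as np

def noisy_softplus(x: np.ndarray, sigma: float, k: float = 0.2) -> np.ndarray:
    """Smooth, noise-dependent approximation of a spiking neuron's response rate."""
    return k * sigma * np.log1p(np.exp(x / (k * sigma)))

x = np.linspace(-2.0, 2.0, 5)
for sigma in (0.5, 1.0, 2.0):
    print(sigma, np.round(noisy_softplus(x, sigma), 3))
```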
