831 |
The linguistic and cognitive mechanisms underlying language tests in healthy adults : a principal component analysis
Bresolin Goncalves, Ana Paula 04 1900 (has links)
Pour un processus d’évaluation linguistique plus précis et rapide, il est important
d’identifier les mécanismes cognitifs qui soutiennent des tâches langagières couramment
utilisées. Une façon de mieux comprendre ces mécanismes est d’explorer la variance
partagée entre les tâches linguistiques en utilisant l’analyse factorielle exploratoire. Peu
d’études ont employé cette méthode pour étudier ces mécanismes dans le
fonctionnement normal du langage. Par conséquent, notre objectif principal est
d’explorer comment un ensemble de tâches linguistiques se regroupent afin d’étudier les
mécanismes cognitifs sous-jacents à ces tâches. Nous avons évalué 201 participants en
bonne santé âgés entre 18 et 75 ans (moyenne=45,29, écart-type=15,06) et avec une
scolarité entre 5 et 23 ans (moyenne=11,10, écart-type=4,68) ; parmi ceux-ci, 62,87%
étaient des femmes. Nous avons employé deux batteries linguistiques : le Protocole
d’examen linguistique de l’aphasie Montréal-Toulouse et le Protocole Montréal d’Évaluation
de la Communication – version abrégée. En utilisant l’analyse en composantes principales
avec une rotation Direct-oblimin, nous avons découvert quatre composantes du langage :
la sémantique picturale (tâches de compréhension orale, dénomination orale et
dénomination écrite), l'exécutif linguistique (tâches d’évocation lexicale - critères
sémantique, orthographique et libre), le transcodage et la sémantique (tâches de lecture,
dictée et de jugement sémantique) et la pragmatique (tâches d'interprétation d'actes de
parole indirecte et d'interprétation de métaphores). Ces quatre composantes expliquent
59,64 % de la variance totale. Deuxièmement, nous avons vérifié l'association entre ces
composantes et deux mesures des fonctions exécutives dans un sous-ensemble de 33
participants. La performance de la flexibilité cognitive a été évaluée en soustrayant le
temps A au temps B du Trail Making Test et celle de la mémoire de travail en prenant le
total des réponses correctes au test du n-back. La composante exécutive linguistique était
associée à une meilleure flexibilité cognitive (r=-0,355) et la composante transcodage et
sémantique à une meilleure performance de mémoire de travail (r=0,397). Nos résultats
confirment l’hétérogénéité des processus sous-jacents aux tâches langagières et leur
relation intrinsèque avec d'autres composantes cognitives, telles que les fonctions
exécutives. / For a more accurate and time-efficient language assessment process, it is important
to identify the cognitive mechanisms that sustain commonly used language tasks. One
way to do so is to explore the shared variance across language tasks using the technique
of principal components analysis. Few studies have applied this technique to investigate these
mechanisms in normal language functioning. Therefore, our main goal was to explore how
a set of language tasks group together, in order to investigate the underlying cognitive
mechanisms of commonly used tasks. We assessed 201 healthy participants aged
between 18 and 75 years old (mean = 45.29, SD = 15.06) and with a formal education
between 5 and 23 years (mean = 11.10, SD = 4.68); of these, 62.87% were female. We used
two language batteries: the Montreal-Toulouse language assessment battery and the
Montreal Communication Evaluation Battery – brief version. Using a Principal Component
Analysis with a Direct-oblimin rotation, we discovered four language components:
pictorial semantics (auditory comprehension, oral naming, and written naming tasks),
language-executive (unconstrained, semantic, and phonological verbal fluency tasks),
transcoding and semantics (reading, dictation, and semantic judgment tasks), and
pragmatics (indirect speech acts interpretation and metaphors interpretation tasks).
These four components explained 59.64% of the total variance. Second, we sought to
verify the association between these components and two executive measures in a subset
of 33 participants. Cognitive flexibility was assessed by the time B-time A score of the Trail
Making Test and working memory by the total number of correct answers on the n-back test. The
language-executive component was associated with a better cognitive flexibility score
(r=-.355) and the transcoding and semantics one with a better working memory
performance (r=.397). Our findings confirm the heterogeneity of the processes underlying
language tasks and their intrinsic relationship to other cognitive components, such as
executive functions.
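Illustrative only (not code from the thesis): a minimal Python sketch of the pipeline the abstract describes — PCA with an oblimin rotation over standardized task scores, then correlating a component score with an executive measure. The array shapes, the random data, and the factor_analyzer dependency are assumptions.

# Hypothetical sketch: PCA + oblimin rotation, then correlation of
# component scores with an executive measure (e.g., TMT B - A).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from factor_analyzer import Rotator          # assumed dependency
from scipy.stats import pearsonr

# scores: 201 participants x 10 language-task scores (made-up data)
scores = np.random.default_rng(0).normal(size=(201, 10))

z = StandardScaler().fit_transform(scores)   # standardize each task
pca = PCA(n_components=4).fit(z)             # 4 components, as in the study
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = Rotator(method="oblimin").fit_transform(loadings)  # oblique rotation

print("variance explained:", pca.explained_variance_ratio_.sum())  # ~0.60 reported

# Component scores for a subset with executive measures (hypothetical n=33)
comp_scores = z[:33] @ np.linalg.pinv(rotated.T)   # rough score estimate
tmt_b_minus_a = np.random.default_rng(1).normal(size=33)  # placeholder TMT B - A
r, p = pearsonr(comp_scores[:, 1], tmt_b_minus_a)  # e.g., language-executive component
print(f"r = {r:.3f}, p = {p:.3f}")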
|
832 |
Market Surveillance Using Empirical Quantile Model and Machine Learning / Marknadsövervakning med hjälp av empirisk kvantilmodell och maskininlärning
Landberg, Daniel January 2022 (has links)
In recent years, financial trading has become more available. This has led to more market participants and more trades taking place each day. The increased activity also implies an increasing number of abusive trades. To detect the abusive trades, market surveillance systems are developed and used. In this thesis, two different methods were tested to detect these abusive trades in high-dimensional data. One was based on empirical quantiles, and the other was based on an unsupervised machine learning technique called isolation forest. The empirical quantile method uses empirical quantiles on dimensionally reduced data to determine whether a datapoint is an outlier or not. Principal Component Analysis (PCA) is used to reduce the dimensionality of the data and to handle the correlation between features. Isolation forest is a machine learning method that detects outliers by sorting each datapoint into a tree structure: the closer a datapoint is to the root, the more likely it is to be an outlier. Isolation forest has been proven to detect outliers in high-dimensional datasets successfully, but had not been tested for market surveillance before. The performance of both the empirical quantile method and isolation forest was measured using recall and run-time. The conclusion was that the empirical quantile method did not detect outliers accurately when all dimensions of the data were used. The method most likely suffered from the curse of dimensionality and could not handle high-dimensional data; however, its performance increased when the dimensionality was reduced. Isolation forest performed better than the empirical quantile method and detected 99% of all outliers by classifying 226 datapoints as outliers out of a dataset of 1882 datapoints containing 184 true outliers.
/ Under de senaste åren har finansiell handel blivit mer tillgänglig för allmänheten. Detta har lett till fler deltagare på marknaderna och att fler affärer sker varje dag. Den ökade aktiviteten innebär också att de missbruk som förekommer ökar. För att upptäcka otillåtna affärer utvecklas och används marknadsövervakningssystem. I den här avhandlingen testades två olika metoder för att upptäcka dessa missbruk utifrån högdimensionell data. Den ena baserades på empiriska kvantiler och den andra baserades på en oövervakad maskininlärningsteknik som kallas isolationsskog. Den empiriska kvantilmetoden använder empiriska kvantiler på dimensionellt reducerad data för att avgöra om en datapunkt är ett extremvärde eller inte. För att reducera dimensionen av datan, och för att hantera korrelationen mellan variabler, används huvudkomponentanalys (HKA). Isolationsskog är en maskininlärningsmetod som upptäcker extremvärden genom att sortera varje datapunkt i en trädstruktur. Om en datapunkt är nära roten är det mer sannolikt att den är ett extremvärde. Isolationsskog har visat sig framgångsrikt upptäcka extremvärden i högdimensionella datauppsättningar, men har inte testats för marknadsövervakning tidigare. För att mäta prestanda för båda metoderna användes recall och körtid. Slutsatsen är att den empiriska kvantilmetoden inte hittade extremvärden när alla dimensioner av datan användes. Metoden led med största sannolikhet av dimensionalitetens förbannelse och kunde inte hantera högdimensionell data, men när dimensionaliteten reducerades ökade prestandan.
Isolationsskog presterade bättre än den empiriska kvantilmetoden och lyckades detektera 99% av alla extremvärden genom att klassificera 226 datapunkter som extremvärden ur ett dataset med 184 verkliga extremvärden och 1882 datapunkter.
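A hedged Python sketch of the comparison described above — an empirical quantile rule on PCA-reduced data versus an isolation forest, scored by recall. All data are synthetic, and the quantile thresholds and feature count are assumptions rather than the thesis settings.

# Sketch of the two outlier detectors compared in the thesis (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.metrics import recall_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1882, 12))                 # trades x features (made up)
y_true = np.zeros(1882, dtype=int)
outlier_idx = rng.choice(1882, size=184, replace=False)
X[outlier_idx] += 6.0                           # inject 184 anomalies
y_true[outlier_idx] = 1

# --- Empirical quantile method on PCA-reduced data ---
Z = PCA(n_components=3).fit_transform(X)        # reduce dimensionality first
lo, hi = np.quantile(Z, 0.01, axis=0), np.quantile(Z, 0.99, axis=0)
quantile_flag = ((Z < lo) | (Z > hi)).any(axis=1).astype(int)

# --- Isolation forest on the full feature space ---
iso = IsolationForest(contamination=184 / 1882, random_state=0).fit(X)
iso_flag = (iso.predict(X) == -1).astype(int)   # -1 marks outliers

print("quantile recall:", recall_score(y_true, quantile_flag))
print("isolation forest recall:", recall_score(y_true, iso_flag))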
|
833 |
Ecosystem services in a rural landscape of southwest Ohio
Lin, Meimei 10 December 2012 (has links)
No description available.
|
834 |
Learning Latent Temporal Manifolds for Recognition and Prediction of Multiple Actions in Streaming Videos using Deep Networks
Nair, Binu Muraleedharan 03 June 2015 (has links)
No description available.
|
835 |
[pt] EFEITO DAS INTERVENÇÕES DO BCB NA CURVA DE CUPOM CAMBIAL / [en] THE EFFECT OF BRAZIL CENTRAL BANK'S INTERVENTIONS ON THE CUPOM CAMBIAL CURVE
VICTOR AUGUSTO MESQUITA CRAVEIRO 05 February 2020 (has links)
[pt] Neste estudo, tentamos estimar o impacto da medida intervencionista mais recente e amplamente adotada pelo Banco Central do Brasil no mercado de câmbio sobre a Curva de Cupom Cambial: a emissão de Swaps Cambiais. O objetivo do BCB com essa intervenção era prover o setor privado de proteção contra a volatilidade cambial à época. O trabalho foca no efeito dessas medidas na curva de Cupom Cambial por conta da importância do funcionamento dessa curva para a correta precificação do mercado de dólar futuro, já que, no Brasil, a formação da taxa de câmbio se dá no preço futuro de dólar e não no preço à vista, como é comum nos outros países. Através de Análise de Componentes Principais sobre a Curva de Cupom Cambial, encontramos seus três primeiros componentes (nível, inclinação e curvatura) e os regredimos em variáveis independentes que representam a série de emissões de Swap por parte do BC. Os resultados indicam que os Swaps Cambiais geram mudanças significativas no nível geral da Curva de Cupom Cambial. Já os Swaps Reversos não apresentam impacto estatisticamente significante no nível, mas sim na inclinação da curva. / [en] In this study, we try to estimate the impact of the most recent currency intervention measure widely adopted by the Central Bank of Brazil, the issuance of Foreign Exchange Swaps, on the Cupom Cambial Curve. The BCB's objective with this intervention was to provide the private sector with a hedge against exchange rate volatility. This paper focuses on the effect of these measures on the Cupom Cambial Curve because a correct reading of this curve is essential for pricing the future dollar market, given that, in Brazil, the formation of the foreign exchange rate occurs in the future dollar price and not in the spot price, as is more common in other countries. Through Principal Component Analysis on the Cupom Cambial Curve, we find its first three components (level, slope and curvature) and regress them, as explained variables, on independent variables that represent the series of Swaps issued by the Central Bank. The results indicate that the Foreign Exchange Swaps generate significant changes in the overall level of the Cupom Cambial Curve. Reverse Swaps, on the other hand, do not have a statistically significant impact on the level, but do affect the slope of the curve.
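A minimal Python sketch (not from the dissertation) of the described approach: extract level, slope, and curvature components from the curve by PCA, then regress each component's scores on the swap-issuance series. The tenor count and all data are synthetic assumptions.

# Sketch: PCA on a yield-curve panel, then OLS of each component on swaps.
import numpy as np

rng = np.random.default_rng(0)
curve = rng.normal(size=(500, 8))         # days x curve tenors (hypothetical)
swaps = rng.normal(size=(500, 2))         # daily FX-swap / reverse-swap issuance

# PCA via eigendecomposition of the covariance matrix
c = curve - curve.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(c, rowvar=False))
order = np.argsort(vals)[::-1]
pcs = c @ vecs[:, order[:3]]              # level, slope, curvature scores

# OLS of each component on swap issuance (with intercept)
Xd = np.column_stack([np.ones(len(swaps)), swaps])
for i, name in enumerate(["level", "slope", "curvature"]):
    beta, *_ = np.linalg.lstsq(Xd, pcs[:, i], rcond=None)
    print(name, "coefficients:", beta)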
|
836 |
Identifying factors that correlate with Enhanced Biological Phosphorus Removal upsets at Lundåkraverket / Undersökning av faktorer som påverkar biologisk fosforavskiljning vid Lundåkraverket
Niranjan, Rounak January 2021 (has links)
The Enhanced Biological Phosphorus Removal (EBPR) process is characterized as the most sustainable process to remove phosphorus from wastewater, albeit with high variability in performance efficiency. Thus, unpredictable upsets in the EBPR system are the norm across several wastewater treatment plants throughout Sweden, forcing the hand of the operators to dose higher volumes of chemicals to reach the effluent requirements. As future effluent requirements are getting stricter, and since higher chemical usage is environmentally and economically unsustainable, this investigation was set up to evaluate which environmental, operational and/or wastewater characteristics correlate with EBPR upsets at a full-scale wastewater treatment plant (WWTP), more specifically at Lundåkra WWTP operated by Nordvästra Skånes Vatten och Avlopp (NSVA). The data used in the investigation were collected between 1st January 2018 and 31st December 2020 for a vast number of parameters known to play a key role in biological phosphorus removal. Online sensors as well as external and internal analyses contributed to the data, which included parameters such as ‘Total flow at the plant’, ‘pH of the incoming water’, ‘Temperature in aeration basins’, ‘Dissolved oxygen (DO) levels in aeration basins’, ‘Nitrate in aeration basins’, ‘Sludge content in aeration basins’, etc. Other relevant parameters such as ‘Hydraulic retention time (HRT) in the treatment units’, ‘Sludge retention time (SRT) in aeration basin’, ‘Organic loading rate (OLR)’, etc. were calculated. Before the start of this investigation, two possible explanations were presumed; they can be classified as: (i) upsets as a result of unsuitable environmental conditions and/or errors in the operational strategy at the plant, and (ii) upsets as a result of toxicity, specifically from higher concentrations of metals in the influent. Traditional statistical methods such as t-distributed Stochastic Neighbor Embedding (t-SNE), Spearman Rank Correlation and Principal Component Analysis were used in this study to test the first presumed explanation. The t-SNE plot showed that the upsets did not cluster into one large group but instead clumped into smaller groups scattered across the length of the scale in both dimensions. This points towards the multivariate dependency of the EBPR process and shows that upsets might occur even with an operational strategy that produces good results otherwise. This, in turn, alludes to the fact that a non-included parameter such as the ‘daily metal concentrations in the influent’ could be responsible for some or all of the upsets. The Principal Component Analysis (PCA) plot, although noisy, suggested an improvement strategy built around the key variables, namely ‘nitrate in aeration basin 1 & 2’, ‘sludge content in aeration basin’, ‘SRT in aeration basin’, ‘O2 in aeration basin 1 & 2’ and ‘pH of incoming water’. Therefore, it is recommended that an improvement strategy be devised around them. Multiple causal factors increase the complexity of the analysis by decreasing the correlation coefficients; however, incorporation of the scatterplots presents a clearer picture. The parameters ‘nitrate in aeration basin 1 & 2’ and ‘sludge content in aeration basin’ showed the strongest correlations with phosphate values at the end of biological treatment, at -0.32 and 0.42 respectively. The results also open the door to future research and provide direction for further investigations.
/ Den förbättrade biologiska fosforborttagningsprocessen karakteriseras som den mest hållbara processen för att avlägsna fosfor från avloppsvatten om än med stor variation i prestandaeffektivitet. Således är oförutsägbara störningar i systemet för förbättrad biologisk fosforavskiljning (EBPR) normen bland flera avloppsreningsverk i hela Sverige, vilket tvingar operatörerna att dosera högre volymer kemikalier för att nå avloppskraven. Eftersom framtida avloppskrav blir allt strängare och eftersom högre kemikalieanvändning är miljömässigt och ekonomiskt ohållbar, gjordes denna undersökning för att utvärdera vilka miljö-, drifts- och/eller avloppsvattenegenskaper som korrelerar med EBPR-störningar vid fullskaligt avloppsreningsverk, närmare bestämt vid Lundåkra reningsverk som drivs av Nordvästra Skånes Vatten och Avlopp. Datan som användes i undersökningen samlades in mellan 1:a januari 2018 och 31:a december 2020 för ett stort antal parametrar som är kända för att spela en nyckelroll vid borttagning av biologiskt P. Onlinesensorer samt externa och interna analyser bidrog till datan, vilken inkluderade parametrar som 'Totalt flöde vid anläggningen', 'pH för det inkommande vattnet', 'Temperatur i luftningsbassänger', 'Nivåer av upplöst syre (DO) i luftningsbassänger', 'Nitrat i luftningsbassänger', 'Slamhalt i luftningsbassänger', etc. Andra relevanta parametrar som 'Hydraulisk retentionstid (HRT) i behandlingsenheterna', 'Slamretentionstid (SRT) i luftningsbassäng', 'Organisk belastningshastighet (OLR)', etc. beräknades. Innan denna undersökning påbörjades antogs två möjliga förklaringar och de kan klassificeras som: (i) störningar till följd av olämpliga miljöförhållanden och/eller fel i driftstrategin vid anläggningen och (ii) störningar till följd av toxicitet, specifikt från högre koncentrationer av metaller i inflödet. Traditionella statistiska metoder såsom t-Distributed Stochastic Neighbor Embedding (t-SNE), Spearman Rank Correlation och Principal Component Analysis användes i denna studie för att testa den första förmodade förklaringen. t-SNE-diagrammet visade att störningarna inte samlades i en stor grupp utan istället klumpades ihop i mindre grupper utspridda över skalans längd i båda dimensionerna. Detta pekar mot EBPR-processens multivariata beroende och visar att störningar kan uppstå även med en operativ strategi som annars ger bra resultat. Detta antyder i sin tur att en icke-inkluderad parameter som 'dagliga metallkoncentrationer i inflödet' kan vara orsaken till några eller alla störningar. Principal Component Analysis (PCA)-diagrammet, trots att det var brusigt, möjliggjorde en förbättringsstrategi byggd kring nyckelvariablerna, nämligen 'nitrat i luftningsbassäng 1 & 2', 'slamhalt i luftningsbassäng', 'SRT i luftningsbassäng', 'O2 i luftningsbassäng 1 & 2' och 'pH av inkommande vatten'. Därför rekommenderas att en förbättringsstrategi utarbetas kring dem. Flera kausala faktorer ökar komplexiteten i analysen genom att minska korrelationskoefficienterna, men spridningsdiagrammen ger en tydligare bild. Parametrarna 'nitrat i luftningsbassäng 1 & 2' och 'slamhalt i luftningsbassäng' visade starkast samband med fosfatvärden vid slutet av biologisk behandling, -0,32 respektive 0,42. Resultaten lämnar dörren öppen för framtida forskning och kan vägleda vidare undersökningar.
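A minimal Python sketch of the statistical toolkit named above (a t-SNE embedding plus Spearman rank correlations); the dataset shape, the proxy target, and the correlation threshold are illustrative assumptions, not the study's values.

# Sketch: 2-D t-SNE embedding of daily plant data, then rank correlations
# of each parameter with an effluent-phosphate proxy. All data synthetic.
import numpy as np
from sklearn.manifold import TSNE
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X = rng.normal(size=(1096, 15))            # daily observations x parameters
phosphate = X[:, 0] * 0.4 + rng.normal(scale=0.9, size=1096)  # proxy target

# Embed to inspect whether upset days cluster together
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Rank correlation of each parameter with the phosphate proxy
for j in range(X.shape[1]):
    rho, p = spearmanr(X[:, j], phosphate)
    if abs(rho) > 0.3:                     # cf. nitrate (-0.32), sludge (0.42)
        print(f"parameter {j}: rho = {rho:.2f} (p = {p:.1e})")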
|
837 |
Nuevas contribuciones a la teoría y aplicación del procesado de señal sobre grafos
Belda Valls, Jordi 16 January 2023 (has links)
[ES] El procesado de señal sobre grafos es un campo emergente de técnicas que combinan conceptos de dos áreas muy consolidadas: el procesado de señal y la teoría de grafos. Desde la perspectiva del procesado de señal puede obtenerse una definición de la señal mucho más general asignando cada valor de la misma a un vértice de un grafo. Las señales convencionales pueden considerarse casos particulares en los que los valores de cada muestra se asignan a una cuadrícula uniforme (temporal o espacial). Desde la perspectiva de la teoría de grafos, se pueden definir nuevas transformaciones del grafo de forma que se extiendan los conceptos clásicos del procesado de la señal como el filtrado, la predicción y el análisis espectral. Además, el procesado de señales sobre grafos está encontrando nuevas aplicaciones en las áreas de detección y clasificación debido a su flexibilidad para modelar dependencias generales entre variables.
En esta tesis se realizan nuevas contribuciones al procesado de señales sobre grafos. En primer lugar, se plantea el problema de estimación de la matriz Laplaciana asociada a un grafo, que determina la relación entre nodos. Los métodos convencionales se basan en la matriz de precisión, donde se asume implícitamente Gaussianidad. En esta tesis se proponen nuevos métodos para estimar la matriz Laplaciana a partir de las correlaciones parciales asumiendo respectivamente dos modelos no Gaussianos diferentes en el espacio de las observaciones: mezclas gaussianas y análisis de componentes independientes. Los métodos propuestos han sido probados con datos simulados y con datos reales en algunas aplicaciones biomédicas seleccionadas. Se demuestra que pueden obtenerse mejores estimaciones de la matriz Laplaciana con los nuevos métodos propuestos en los casos en que la Gaussianidad no es una suposición correcta.
También se ha considerado la generación de señales sintéticas en escenarios donde la escasez de señales reales puede ser un problema. Los modelos sobre grafos permiten modelos de dependencia por pares más generales entre muestras de señal. Así, se propone un nuevo método basado en la Transformada de Fourier Compleja sobre Grafos y en el concepto de subrogación. Se ha aplicado en el desafiante problema del reconocimiento de gestos con las manos. Se ha demostrado que la extensión del conjunto de entrenamiento original con réplicas sustitutas generadas con los métodos sobre grafos, mejora significativamente la precisión del clasificador de gestos con las manos. / [CAT] El processament de senyal sobre grafs és un camp emergent de tècniques que combinen conceptes de dues àrees molt consolidades: el processament de senyal i la teoria de grafs. Des de la perspectiva del processament de senyal pot obtindre's una definició del senyal molt més general assignant cada valor de la mateixa a un vèrtex d'un graf. Els senyals convencionals poden considerar-se casos particulars en els quals els valors de la mostra s'assignen a una quadrícula uniforme (temporal o espacial). Des de la perspectiva de la teoria de grafs, es poden definir noves transformacions del graf de manera que s'estenguen els conceptes clàssics del processament del senyal com el filtrat, la predicció i l'anàlisi espectral. A més, el processament de senyals sobre grafs està trobant noves aplicacions en les àrees de detecció i classificació a causa de la seua flexibilitat per a modelar dependències generals entre variables.
En aquesta tesi es donen noves contribucions al processament de senyals sobre grafs. En primer lloc, es planteja el problema d'estimació de la matriu Laplaciana associada a un graf, que determina la relació entre nodes. Els mètodes convencionals es basen en la matriu de precisió, on s'assumeix implícitament la gaussianitat. En aquesta tesi es proposen nous mètodes per a estimar la matriu Laplaciana a partir de les correlacions parcials assumint respectivament dos models no gaussians diferents en l'espai d'observació: mescles gaussianes i anàlisis de components independents. Els mètodes proposats han sigut provats amb dades simulades i amb dades reals en algunes aplicacions biomèdiques seleccionades. Es demostra que poden obtindre's millors estimacions de la matriu Laplaciana amb els nous mètodes proposats en els casos en què la gaussianitat no és una suposició correcta.
També s'ha considerat el problema de generar senyals sintètics en escenaris on l'escassetat de senyals reals pot ser un problema. Els models sobre grafs permeten models de dependència per parells més generals entre mostres de senyal. Així, es proposa un nou mètode basat en la Transformada de Fourier Complexa sobre Grafs i en el concepte de subrogació. S'ha aplicat en el desafiador problema del reconeixement de gestos amb les mans. S'ha demostrat que l'extensió del conjunt d'entrenament original amb rèpliques substitutes generades amb mètodes sobre grafs, millora significativament la precisió del classificador de gestos amb les mans. / [EN] Graph signal processing appears as an emerging field of techniques that combine concepts from two highly consolidated areas: signal processing and graph theory. From the perspective of signal processing, it is possible to achieve a more general signal definition by assigning each value of the signal to a vertex of a graph. Conventional signals can be considered particular cases where the sample values are assigned to a uniform (temporal or spatial) grid. From the perspective of graph theory, new transformations of the graph can be defined in such a way that they extend the classical concepts of signal processing such as filtering, prediction and spectral analysis. Furthermore, graph signal processing is finding new applications in detection and classification areas due to its flexibility to model general dependencies between variables.
In this thesis, new contributions are made to graph signal processing. First, the problem of estimating the Laplacian matrix associated with a graph, which determines the relationship between nodes, is considered. Conventional methods are based on the precision matrix, where Gaussianity is implicitly assumed. In this thesis, new methods to estimate the Laplacian matrix from the partial correlations are proposed, respectively assuming two different non-Gaussian models in the observation space: Gaussian Mixtures and Independent Component Analysis. The proposed methods have been tested with simulated data and with real data in some selected biomedical applications. It is demonstrated that better estimates of the Laplacian matrix can be obtained with the newly proposed methods in cases where Gaussianity is not a correct assumption.
The problem of generating synthetic signals in scenarios where the scarcity of real signals can be an issue has also been considered. Graph models allow more general pairwise dependence models between signal samples. Thus, a new method based on the Complex Graph Fourier Transform and on the concept of surrogate data is proposed. It has been applied to the challenging problem of hand gesture recognition. It has been demonstrated that extending the original training set with graph surrogate replicas significantly improves the accuracy of the hand gesture classifier. / Belda Valls, J. (2022). Nuevas contribuciones a la teoría y aplicación del procesado de señal sobre grafos [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/191333
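A hedged Python sketch of the baseline route from observations to a graph Laplacian via partial correlations — the precision-matrix path that the thesis improves upon; the Gaussian-mixture and ICA variants proposed in the thesis are not reproduced here, and the data are synthetic.

# Sketch: graph Laplacian from partial correlations (Gaussian baseline).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))             # observations x nodes (synthetic)

K = np.linalg.inv(np.cov(X, rowvar=False)) # precision matrix (assumes Gaussianity)
d = np.sqrt(np.diag(K))
partial_corr = -K / np.outer(d, d)         # off-diagonal partial correlations
np.fill_diagonal(partial_corr, 0.0)

W = np.abs(partial_corr)                   # edge weights from |partial corr.|
L = np.diag(W.sum(axis=1)) - W             # combinatorial Laplacian L = D - W
print(np.round(L, 2))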
|
838 |
The Application of Differential Synthetic Aperture Radar Interferometry Dataset for Validation, Characterization and Flood Risk Analysis in Land Subsidence-Affected Areas
Navarro-Hernández, María I. 02 July 2024 (has links)
This interdisciplinary doctoral dissertation addresses land subsidence in diverse case studies around the world, employing advanced techniques and methodologies to measure its magnitude and to comprehensively explore its causes and implications. Investigating areas such as the San Luis Potosi metropolitan area, the Alaşehir-Sarıgöl sub-basin (ASSB) in Türkiye, and the Alto Guadalentín Valley in Spain, the research unveils critical insights into the complex dynamics of subsidence phenomena. Utilizing advanced remote sensing techniques like Persistent Scatterer Interferometry (PSI) and the Coherent Pixels Technique (CPT), the study assesses subsidence rates and correlates them with factors such as trace faults, groundwater extraction, and soft soil thickness. In the first stage, validation methodologies integrating Global Navigation Satellite System (GNSS) benchmarks were developed and proposed to the scientific community; these enhance the reliability of Differential Synthetic Aperture Radar Interferometry (DInSAR) measurements, ensuring a robust foundation for subsequent analyses. The research aims to contribute to the understanding of land subsidence and to the creation of a decision-support framework to mitigate the phenomenon, while addressing specific research objectives within each identified topic of inquiry. Research topic 1 covers “DInSAR for monitoring land subsidence in overexploited aquifers”. In the San Luis Potosi metropolitan area (Mexico), the application of the CPT technique reveals intriguing correlations between trace faults, land subsidence, and groundwater extraction. Specifically, areas in the municipality of Soledad de Graciano Sánchez exhibit subsidence values ranging between -1.5 and -3.5 cm/year, while in San Luis Potosi, values range from -1.8 to -4.2 cm/year. The validation of CPT results against five Global Navigation Satellite System (GNSS) benchmarks establishes a robust correlation of 0.986, underlining the reliability of InSAR-derived deformations. Additionally, in regions like the Alaşehir-Sarıgöl sub-basin (Türkiye), where water stress is heightened due to intensive agricultural irrigation, the study explores the roles of tectonic activity and groundwater withdrawal in land subsidence. Utilizing the P-SBAS algorithm, 98 Sentinel-1 SAR images in ascending orbits and 123 in descending orbits were analysed, covering the period from 2016 to 2020. Independent Component Analysis was applied to distinguish long-term displacements from seasonal variations in the DInSAR time series data. Displacement rates of up to -6.40 cm/year were identified; the P-SBAS processing thus facilitates the monitoring of displacement, revealing direct correlations between DInSAR displacement and critical factors like aquitard layer compaction. These findings contribute valuable insights into the dynamic interactions shaping overexploited aquifers. Research topic 2, developed in parallel with topic 1, consists of the “Validation of DInSAR data applied to land subsidence areas”. Addressing the imperative for validation methodologies in subsidence assessments, a systematic approach introduces statistical analyses and classification schemes. This methodology is designed to validate and refine DInSAR data, enhancing the reliability of subsidence assessments. By normalizing Root Mean Square Error (RMSE) parameters with the range and average of in-situ deformation values and employing the squared Pearson correlation coefficient (R²), a classification scheme is established.
This scheme facilitates the acceptance/rejection of DInSAR data for further analyses through automatic analysis supported by a MATLAB© code, ensuring a more accurate representation of land subsidence phenomena. Research topic 3 covers the exploitation of DInSAR data for assessing flooding potential and determining characteristic parameters of aquifer systems. The first application is the “Impact of land subsidence on flood patterns”. The study in the Alto Guadalentín Valley, a region experiencing extreme flash floods together with high-magnitude land subsidence, integrates flood event models, Differential Interferometric SAR (DInSAR) techniques, and 2D hydraulic flow models. Through Synthetic Aperture Radar (SAR) satellite images and DInSAR, the magnitude and spatial distribution of land subsidence are quantified. The results demonstrate significant changes in water surface elevation between the 1992 and 2016 temporal scenarios, leading to a 2.04 km² increase in areas with water depths exceeding 0.7 m. These outcomes, incorporated into a flood risk map and an economic flood risk assessment, underscore the pivotal role of land subsidence in determining inundation risk and its socio-economic implications. The research offers a valuable framework for enhancing flood modelling by considering the intricate dynamics of land subsidence. The second application concerns the “Automatic calculation of skeletal storage coefficients in aquifer systems”. In response to the need for automating data analysis for specific storage coefficients in aquifer systems, a MATLAB© application is introduced. This application streamlines the correlation between piezometric levels and ground deformation, significantly reducing analysis time and mitigating potential human interpretation errors. The developed application integrates temporal groundwater level series from observation wells and ground deformation data measured by in-situ or remote sensing techniques (e.g., DInSAR). Through the automatic construction of stress-strain curves, the application contributes to the estimation of skeletal storage coefficients, offering a valuable tool for evaluating aquifer system behaviours. This comprehensive research, guided by the complexities of these three distinct research topics, yields detailed insights and methodological advancements. By integrating diverse datasets and employing advanced techniques, this dissertation offers a multidimensional understanding of land subsidence dynamics and provides a robust foundation for sustainable groundwater management globally. / This research is funded by the PRIMA Programme supported by the European Union (Grant agreement 1924), project RESERVOIR.
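A minimal Python sketch of the validation scheme described above — RMSE normalized by the range and mean of the in-situ series, plus the squared Pearson correlation, feeding an accept/reject rule. The 0.2 and 0.7 thresholds are illustrative assumptions, not the thesis values (the original implementation is a MATLAB code).

# Sketch: accept/reject DInSAR series against a GNSS benchmark.
import numpy as np

def validate_dinsar(insar, insitu, nrmse_max=0.2, r2_min=0.7):
    resid = insar - insitu
    rmse = np.sqrt(np.mean(resid**2))
    nrmse_range = rmse / (insitu.max() - insitu.min())  # RMSE / range
    nrmse_mean = rmse / np.abs(insitu.mean())           # RMSE / average
    r2 = np.corrcoef(insar, insitu)[0, 1] ** 2          # squared Pearson corr.
    accept = (nrmse_range <= nrmse_max) and (r2 >= r2_min)
    return {"rmse": rmse, "nrmse_range": nrmse_range,
            "nrmse_mean": nrmse_mean, "r2": r2, "accept": accept}

# Synthetic GNSS benchmark series vs. DInSAR estimates (mm)
t = np.linspace(0, 4, 50)
insitu = -30.0 * t + np.random.default_rng(0).normal(scale=2.0, size=50)
insar = insitu + np.random.default_rng(1).normal(scale=3.0, size=50)
print(validate_dinsar(insar, insitu))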
|
839 |
A Novel Approach to Enhancing Multi-Modal Facial Recognition: Integrating Convolutional Neural Networks, Principal Component Analysis, and Sequential Neural Networks
Abdul-Al, Mohamed, Kyeremeh, George Kumi, Qahwaji, Rami, Ali, N., Abd-Alhameed, Raed 16 September 2024 (has links)
Yes / Facial recognition technology is crucial for precise and rapid identity verification and security. This research delves into advancements in facial recognition technology for verification purposes, employing a combination of convolutional neural networks (CNNs), principal component analysis (PCA), and sequential neural networks. Unlike identification, our focus is on verifying an individual's identity, which is a critical distinction in the context of security applications. Our goal is to enhance the efficacy and precision of face verification using several imaging modalities, including thermal, infrared, visible light, and a combination of visible and infrared. Feature extraction is performed using the VGG16 model pre-trained on the ImageNet dataset, complemented by PCA for dimensionality reduction. We introduce a novel method, termed VGG16-PCA-NN, aimed at improving the precision of facial authentication. This method is validated using the Sejong Face Database, with a 70% training, 15% testing, and 15% validation split. While it demonstrates a remarkable accuracy rate approaching 100% across the visual and thermal modalities and a combined visible-infrared modality, it is crucial to note that these results are specific to our dataset and methodology. A comparison with existing approaches highlights the innovative aspect of our work, though variations in datasets and evaluation metrics necessitate cautious interpretation of comparative performance. Our study makes significant contributions to the biometrics and security fields by developing a robust and efficient facial authentication method. This method is designed to overcome challenges posed by environmental variations and physical obstructions, thereby enhancing reliability and performance in diverse conditions. The accuracy rates that the approach achieves across a variety of modalities demonstrate its promise for applications that use multi-modal data. This opens the door to the creation of biometric identification systems that are more trustworthy and secure. It is intended that the technology will be used in real-time settings in which the different modalities can be integrated in various situations.
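A hedged Python sketch of the overall VGG16-PCA-NN flow the abstract outlines (VGG16 features, PCA reduction, a small sequential verifier). The layer sizes, the pair-difference verification strategy, and the dummy data are assumptions; only the pipeline order follows the paper.

# Sketch: VGG16 embeddings -> PCA -> small sequential verification network.
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))        # 512-d embedding per image

images = np.random.rand(64, 224, 224, 3) * 255.0   # placeholder face crops
feats = backbone.predict(preprocess_input(images), verbose=0)

pca = PCA(n_components=32).fit(feats)              # dimensionality reduction
pairs_a, pairs_b = pca.transform(feats[:32]), pca.transform(feats[32:])
x = np.abs(pairs_a - pairs_b)                      # pairwise difference features
y = np.random.randint(0, 2, size=32)               # 1 = same identity (dummy)

verifier = Sequential([Input(shape=(32,)),
                       Dense(64, activation="relu"),
                       Dense(1, activation="sigmoid")])  # match probability
verifier.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=["accuracy"])
verifier.fit(x, y, epochs=2, batch_size=8, verbose=0)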
|
840 |
[en] INTELLIGENT WELL TRANSIENT TEMPERATURE SIGNAL RECONSTRUCTION / [pt] RECONSTRUÇÃO DE SINAIS TRANSIENTES DE TEMPERATURA EM POÇOS INTELIGENTES
MANOEL FELICIANO DA SILVA JUNIOR 10 November 2021 (has links)
[pt] A tecnologia de poços inteligentes já possui muitos anos de experiência de campo. Inúmeras publicações têm descrito como o controle de fluxo remoto e os sistemas de monitoração podem diminuir o número de intervenções, o número de poços e aumentar a eficiência do gerenciamento de reservatórios. Apesar da
maturidade dos equipamentos de completação, o conceito de poço inteligente integrado como um elemento-chave do Digital Oil Field ainda não está completamente desenvolvido. Sistemas permanentes de monitoração nesse contexto têm um papel fundamental como fonte da informação a respeito do
sistema de produção real, visando a calibração de modelos e a minimização de incerteza. Entretanto, cada sensor adicional representa aumento de complexidade e de risco operacional. Um entendimento fundamentado do que realmente é necessário, dos tipos de sensores aplicáveis e de quais técnicas de análise estão disponíveis para extrair as informações necessárias são pontos-chave para o sucesso do projeto de um poço inteligente. Este trabalho propõe uma nova forma de tratar os dados em tempo real de poços inteligentes através da centralização do pré-processamento dos dados. Um modelo numérico de poço inteligente para temperatura em regime transiente foi desenvolvido, testado e validado com a
intenção de gerar dados sintéticos. A aplicação foi escolhida, sem perda de generalidade, como um exemplo representativo para validação dos algoritmos de limpeza e extração de características desenvolvidos. Os resultados mostraram aumento da eficiência quando comparados com o estado da arte e um potencial
para capturar a influência mútua entre os processos de produção. / [en] Intelligent Well (IW) technology has built up several years of production experience. Numerous publications have described how remote flow control and monitoring capabilities can lead to fewer interventions, a reduced well count and improved reservoir management. Despite the maturity of IW equipment, the
concept of the integrated IW as a key element in the Digital Oil Field is still not fully developed. Permanent monitoring systems in this framework play an important role as the source of the necessary information about the actual production system, aiming at model calibration and uncertainty minimization. However, each
extra permanently installed sensor increases the well's installation complexity and operational risk. A well-founded understanding of what data are actually needed and what analysis techniques are available to extract the required information are key factors for the success of an IW project. This work proposes a new framework for real-time data analysis through centralized pre-processing. A numerical IW transient temperature model is developed, tested and validated to generate synthetic data. It was chosen, without loss of generality, as a representative application to test and validate the cleansing and feature
extraction algorithms developed. The results achieved are compared with state-of-the-art ones, showing advantages regarding efficiency and a potential to capture the mutual influence among production processes.
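A minimal Python sketch of a generic cleansing step of the kind the abstract discusses (despiking a transient downhole temperature series before feature extraction); the thesis' own algorithms are not reproduced here, and the signal model and threshold are assumptions.

# Sketch: median-filter despiking of a synthetic transient temperature signal.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)                   # hours
temp = 80.0 + 5.0 * (1 - np.exp(-t / 2.0))         # transient warm-up (degC)
temp += rng.normal(scale=0.05, size=t.size)        # sensor noise
temp[rng.choice(t.size, 10)] += rng.normal(scale=5.0, size=10)  # spikes

smooth = medfilt(temp, kernel_size=11)             # robust local baseline
resid = temp - smooth
spikes = np.abs(resid) > 4.0 * np.median(np.abs(resid))  # robust threshold
clean = np.where(spikes, smooth, temp)             # replace flagged samples

print(f"flagged {spikes.sum()} of {t.size} samples as spikes")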
|