Financial Time Series Analysis using Pattern Recognition Methods

Zeng, Zhanggui January 2008
Doctor of Philosophy / This thesis presents research on financial time series analysis using pattern recognition methods. The first part focuses on univariate time series analysis. First, the probabilities of basic patterns are used to represent the features of a section of a time series; this statistical feature suppresses noise and is shown experimentally to work well for time series with repeated patterns. Second, a multiscale Gaussian gravity, a pattern relationship measure that captures the direction of the relationship between patterns, is introduced for pattern clustering. By searching for the Gaussian-gravity-guided nearest neighbour of each pattern, this clustering method can easily determine cluster boundaries. Third, a method is presented by which unsupervised pattern classification can be transformed into multiscale supervised pattern classification using multiscale supervisory time series or multiscale filtered time series. The second part focuses on multivariate time series analysis using pattern recognition. A systematic method is proposed to find the independent variables of a group of share prices through time series clustering, principal component analysis, independent component analysis, and object recognition. Time series clustering and principal component analysis reduce the number of dependent variables and simplify the multivariate analysis. Independent component analysis aims to find the ideal independent variables of the group of shares, and object recognition is expected to identify those independent variables that are similar to the independent components. This method provides a new clue to understanding the stock market and to modelling a large time series database.
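As an illustration of the multivariate part of this abstract, the sketch below applies PCA followed by independent component analysis to a panel of returns. It is not the thesis's implementation; the data, component counts and library calls are assumptions for demonstration only.

```python
# Minimal sketch (not the thesis's code): reduce a panel of share-price returns
# with PCA, then look for independent drivers with FastICA. All data, component
# counts and thresholds below are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 20))        # hypothetical daily returns for 20 shares

# Reduce the dependent variables: keep components explaining 90% of the variance
pca = PCA(n_components=0.90)
scores = pca.fit_transform(returns)

# Estimate statistically independent source series from the retained components
ica = FastICA(n_components=scores.shape[1], random_state=0, max_iter=1000)
sources = ica.fit_transform(scores)

print(pca.n_components_, "principal components retained")
print(sources.shape, "estimated independent source series")
```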

Genetic and biological architecture of pork quality, carcass, primal-cut and growth traits in Duroc pigs

Hannah E Willson (9187739) 01 August 2020
Within the last few decades, swine breeding programs have been refined to include pork quality and novel carcass traits alongside growth, feed efficiency, and carcass leanness in the selection programs for terminal sire lines, with the goal of producing a high-quality and efficient pork product for consumers. In order to accurately select for multiple traits at once, it becomes imperative to explore their genetic and biological architecture. The genetic architecture of traits can be explored through the estimation of genetic parameters, genome-wide association studies (GWAS), gene networks and metabolic pathways. An alternative approach to explore the genetic and biological connection between traits is based on principal component analysis (PCA), which generates novel "pseudo-phenotypes" and biological types (biotypes). In this context, the main objective of this thesis was to understand the genetic and biological relationship between three growth, eight conventional carcass, 10 pork quality, and 18 novel carcass traits included in two studies. The phenotypic data set included 2,583 records from female Duroc pigs from a terminal sire line. The pedigree file contained 193,764 animals and the genotype file included 21,344 animals with 35,651 single nucleotide polymorphisms (SNPs). The results of the first study indicate that genetic progress can be achieved for all 39 traits. In general, the heritability estimates were moderate, while most genetic correlations were moderate to high and favorable. Some antagonisms were observed, but those genetic correlations were low to moderate. Thus, these relationships can be considered when developing selection indexes. The second study showed that there are strong links between traits through their principal components (PCs). The main PCs identified are linked to biotypes related to growth, muscle and fat deposition, pork color, and body composition. The PCs were also used as pseudo-phenotypes in the GWAS analysis, which identified important candidate genes and metabolic pathways linked to each biotype. All of this evidence links valuable variables such as belly, color, marbling, and leanness traits. Our findings greatly contribute to the optimization of genetic and genomic selection for the inclusion of valuable and novel traits to improve productive efficiency, novel carcass, and meat quality traits in terminal sire lines.
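To illustrate the pseudo-phenotype idea mentioned above, the sketch below derives PC scores from a standardized trait matrix. It is only a toy example, not the study's pipeline (which combined pedigree, genomic and GWAS models); the trait matrix and its dimensions are simulated assumptions.

```python
# Minimal sketch (not the study's pipeline): standardize a trait matrix and use
# PCA scores as "pseudo-phenotypes" that could feed a downstream GWAS. The trait
# matrix here is simulated; the real study used pedigree and genomic models.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
traits = rng.normal(size=(2583, 39))            # hypothetical animals x 39 traits

z = StandardScaler().fit_transform(traits)      # standardize each trait
pca = PCA()
pseudo_phenotypes = pca.fit_transform(z)        # PC scores act as composite traits

# Loadings indicate which traits drive each candidate "biotype" component
loadings = pca.components_
print("variance explained by first 5 PCs:", pca.explained_variance_ratio_[:5].round(3))
print("trait index with the largest loading on PC1:", int(np.argmax(np.abs(loadings[0]))))
```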

A Framework For Analysing Investable Risk Premia Strategies / Ett ramverk för analys av investerbara riskpremiestrategier

Sandqvist, Joakim, Byström, Erik January 2014
The focus of this study is to map, classify and analyse how different fully implementable risk premia strategies perform and are affected by different economic environments. The results are of interest for practitioners who currently invest in, or are considering investing in, risk premia strategies. The study also makes a theoretical contribution, since there is currently a lack of published work on this subject. A combination of the statistical methods cluster tree, spanning tree and principal component analysis is used, first to categorise the investigated risk premia strategies into clusters based on their correlation characteristics, and second to find the strategies' most important return drivers. Lastly, an analysis is conducted of how the clusters of strategies perform in different macroeconomic environments, here represented by inflation and growth. The results show that the three most important drivers of the investigated risk premia strategies are a crisis factor, an equity directional factor and an interest rate factor. These three components explained about 18 percent, 14 percent and 10 percent of the variation in the data, respectively. The results also show that all four clusters, despite containing different types of risk premia strategies, experienced positive total returns during all macroeconomic phases sampled in this study. These results can be seen as indicative of a lower macroeconomic sensitivity among the risk premia strategies and more of an "alpha-like" behaviour.
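A compact illustration of the two analysis steps named in this abstract (correlation-based clustering, then PCA for return drivers) is sketched below. It is not the authors' code; the returns, linkage method and number of clusters are assumptions.

```python
# Minimal sketch (not the authors' code): group risk premia strategies by return
# correlation with a cluster tree, then extract candidate return drivers with PCA.
# The return data and the choice of four clusters are assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
strategy_returns = rng.normal(size=(1000, 30))   # hypothetical days x strategies

# Cluster tree on correlation distance (1 - correlation), condensed upper triangle
corr = np.corrcoef(strategy_returns, rowvar=False)
condensed = (1.0 - corr)[np.triu_indices_from(corr, k=1)]
tree = linkage(condensed, method="average")
clusters = fcluster(tree, t=4, criterion="maxclust")

# Principal components of the strategy returns serve as candidate drivers
pca = PCA(n_components=3)
drivers = pca.fit_transform(strategy_returns)

print("cluster sizes:", np.bincount(clusters)[1:])
print("variance explained by the top 3 drivers:", pca.explained_variance_ratio_.round(3))
```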

A Multi-Level Extension of the Hierarchical PCA Framework with Applications to Portfolio Construction with Futures Contracts / En flernivåsutbyggnad av ramverket för Hierarkisk PCA med tillämpningar på portföljallokering med terminskontrakt

Bjelle, Kajsa January 2023
With an increasingly globalised market and a growing asset universe, estimating the market covariance matrix becomes ever more challenging. In recent years, there has been extensive development of methods aimed at mitigating these issues. This thesis takes its starting point in the recently developed Hierarchical Principal Component Analysis, in which a priori known information is taken into account when modelling the market correlation matrix. However, while showing promising results, the current framework only allows fairly simple hierarchies with a depth of one. In this thesis, we introduce a generalisation of the framework that allows for an arbitrary hierarchical depth. We also evaluate the method in a risk-based portfolio allocation setting with futures contracts. Furthermore, we introduce a shrinkage method called Hierarchical Shrinkage, which uses the hierarchical structure to further regularise the matrix. The proposed models are evaluated with respect to how well-conditioned they are, how well they predict eigenportfolio risk, and how the resulting portfolios perform when the models are used to form the Minimum Variance Portfolio. We show that the proposed models result in sparse and easy-to-interpret eigenvector structures, improved risk prediction, lower condition numbers and longer holding periods, while achieving Sharpe ratios on par with our benchmarks.
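The sketch below shows the general pipeline this abstract evaluates (regularise a correlation estimate, then form minimum-variance weights), using plain linear shrinkage rather than the thesis's Hierarchical PCA or Hierarchical Shrinkage. The returns, shrinkage intensity and dimensions are assumptions for illustration.

```python
# Minimal sketch (not the thesis's Hierarchical PCA): estimate a correlation
# matrix from simulated futures returns, regularise it with plain linear
# shrinkage toward the identity, and form minimum-variance weights. The
# shrinkage intensity and the data are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(size=(750, 50))             # hypothetical days x contracts

corr = np.corrcoef(returns, rowvar=False)
alpha = 0.2                                      # assumed shrinkage intensity
corr_shrunk = (1 - alpha) * corr + alpha * np.eye(corr.shape[0])

vols = returns.std(axis=0)
cov = corr_shrunk * np.outer(vols, vols)         # rebuild a covariance matrix

# Minimum variance portfolio: w = inv(cov) @ 1, normalised to sum to one
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()

print("condition number of the shrunk correlation:", round(float(np.linalg.cond(corr_shrunk)), 1))
print("largest absolute weight:", round(float(np.abs(w).max()), 4))
```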

Understanding particulate matter - Material analyses of real-life diesel particulate filters and correlation to vehicles’ operational data / Att förstå partiklar - Analyser av verkliga dieselpartikelfilter och korrelationer till fordonsdriftparametrar

Nordin, Linus January 2021
The purpose of this study was to investigate the impact of operational parameters on a number of measurable ash-related quantities in diesel particulate filters (DPFs) of heavy-duty vehicles. Previous studies show that ash packing density, ash flow and how the ash is distributed inside a DPF depend on parameters such as temperature, exhaust flow profiles and how much oil a vehicle consumes. There is reason to believe that these parameters are also affected by how a vehicle is operated, which is why different operational parameters were analysed for correlation with the measured ash quantities. The operational parameters investigated in this study were average speed, number of stops per 100 km, idling percentage and fuel consumption.
The study started with method development for measuring ash weights of DPFs and compared three different methods, named I, II and III. Method II, which relies on weighing a piece of a filter substrate before and after cleaning the filter piece from ash with pressurized air, was chosen as the most reliable and useful method, as it was faster, needed less of each DPF to complete the analysis and could be used when analysing DPF samples that had not been examined before their use in a vehicle. The ash weight, together with the volumetric filling degree and the known inlet volume of the DPF, was used to calculate the ash packing density. The filling degree and ash distribution profile were measured with image analysis of microscope images of sawed cross sections of the filter piece. The correlation study was then performed with these methods and correlated with operational data extracted from databases at Scania CV. To study which parameters were correlated to each other, a principal component analysis (PCA) was performed with the operational and measured variables as a matrix of data. The PCA showed that three principal components made up >90% of the variation in the data and that the plug/wall ratio, a numerical value of the ash distribution, was strongly positively correlated with a vehicle's average speed and negatively correlated with the number of stops, idling percentage and fuel consumption. Furthermore, ash flow showed a weaker positive correlation with idling percentage, number of stops and fuel consumption, while oil consumption showed an even weaker correlation with these parameters. This indicates that oil consumption cannot be assumed to be a constant percentage of fuel consumption when calculating service intervals for DPFs. The ash packing density showed no to very low correlation with the other variables in the study, which may be because the DPFs with a high percentage of wall ash had significantly lower runtimes, so the ash may not have had time to pack tightly in the filter channels.
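The PCA step described above can be illustrated with the sketch below, which runs PCA on a combined matrix of operational and measured variables and inspects the loadings. It is not Scania's analysis; the data values and column names are assumptions.

```python
# Minimal sketch (not Scania's analysis): run PCA on a matrix combining vehicle
# operational parameters and measured ash quantities, then inspect the loadings
# to see which variables move together. Data and column names are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

columns = ["avg_speed", "stops_per_100km", "idling_pct", "fuel_consumption",
           "plug_wall_ratio", "ash_flow", "ash_packing_density"]
rng = np.random.default_rng(4)
data = rng.normal(size=(60, len(columns)))       # hypothetical vehicles x variables

z = StandardScaler().fit_transform(data)
pca = PCA(n_components=3)
pca.fit(z)

print("variance explained:", pca.explained_variance_ratio_.round(2))
for i, pc in enumerate(pca.components_, start=1):
    top = sorted(zip(columns, pc), key=lambda t: abs(t[1]), reverse=True)[:3]
    print(f"PC{i} strongest loadings:", [(name, round(float(val), 2)) for name, val in top])
```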

Daily pattern recognition of dynamic origin-destination matrices using clustering and kernel principal component analysis / Daglig mönsterigenkänning av dynamiska Origin-Destination-matriser med hjälp av clustering och kernel principal component analysis

Dong, Zhiwu January 2021
The Origin-Destination (OD) matrix plays an important role in traffic management and urban planning. However, OD estimation demands large-scale data collection, which in the past has mostly been done through surveys with numerous limitations. With the development of communication technology and artificial intelligence, the transportation industry faces new opportunities and challenges. Sensors bring big data characterized by 4V (Volume, Variety, Velocity, Value) to the transportation domain. This allows traffic practitioners to receive data covering large-scale areas and long time periods, even several years of data. At the same time, the introduction of artificial intelligence provides new opportunities and challenges in processing massive data. Advances from computer science have also brought revolutionary progress to the field of transportation. All these advances and technologies enable large-scale data collection that can be used for extracting and estimating dynamic OD matrices over small time intervals and long time periods. Using Stockholm as the focus of the case study, this thesis estimates dynamic OD matrices from data collected at the tolls located around Stockholm municipality. These dynamic OD matrices are used to analyze the day-to-day characteristics of the traffic flow that goes through Stockholm; in other words, the typical day-types of traffic through the city center are identified and studied in this work. The study analyzes data collected by 58 sensors around Stockholm, containing nearly 100 million vehicle observations (12 GB). Furthermore, we consider and study the effects of dimensionality reduction on the revealing of the most common day-types by clustering. The dimensionality reduction techniques considered are Principal Component Analysis (PCA) and its variant Kernel PCA (KPCA). The results reveal that dimensionality reduction significantly reduces computational costs while still producing reasonable day-types. The day-type clusters reveal expected as well as unexpected patterns and thus have potential in traffic management, urban planning, and designing the strategy for congestion tax.
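A minimal version of the day-type pipeline named in this abstract is sketched below: embed flattened daily OD matrices with Kernel PCA and cluster the embedded days with k-means. It is not the thesis's pipeline; the data, kernel parameters and number of clusters are assumptions.

```python
# Minimal sketch (not the thesis's pipeline): embed flattened daily OD matrices
# with Kernel PCA and cluster the embedded days into day-types with k-means.
# Data, kernel choice and number of clusters are assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_days, n_od_pairs = 365, 58 * 58
daily_od = rng.poisson(lam=20, size=(n_days, n_od_pairs)).astype(float)

# Reduce each day to a low-dimensional embedding
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-4)
embedded = kpca.fit_transform(daily_od)

# Cluster days into day-types
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
day_types = kmeans.fit_predict(embedded)

print("days per day-type:", np.bincount(day_types))
```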

Sparse Principal Component Analysis for High-Dimensional Data: A Comparative Study

Bonner, Ashley J. 10 1900
Background: Through unprecedented advances in technology, high-dimensional datasets have exploded into many fields of observational research. For example, it is now common to expect thousands or millions of genetic variables (p) with only a limited number of study participants (n). Determining the important features proves statistically difficult, as multivariate analysis techniques become overwhelmed and mathematically insufficient when n < p. Principal Component Analysis (PCA) is a commonly used multivariate method for dimension reduction and data visualization, but suffers from these issues. A collection of Sparse PCA methods has been proposed to counter these flaws but has not been tested in comparative detail. Methods: The performance of three Sparse PCA methods was evaluated through simulations. Data were generated for 56 different data structures, varying p, the number of underlying groups and the variance structure within them. Estimation and interpretability of the principal components (PCs) were rigorously tested. Sparse PCA methods were also applied to a real gene expression dataset. Results: All Sparse PCA methods showed improvements over classical PCA. Some methods were best at obtaining an accurate leading PC only, whereas others were better for subsequent PCs. Different Sparse PCA methods were optimal depending on within-group correlation and across-group variances; notably, one method repeatedly worked well under the most difficult scenarios. When the methods were applied to real data, concise groups of gene expressions were detected with the sparsest methods. Conclusions: Sparse PCA methods provide a new, insightful way to detect important features amidst complex high-dimensional data. / Master of Science (MSc)
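The contrast this abstract studies can be seen in miniature in the sketch below, which compares ordinary PCA with Sparse PCA on simulated n < p data where only a few variables carry signal. It is not the thesis's simulation design; the sizes, signal strength and penalty are assumptions.

```python
# Minimal sketch (not the thesis's simulation study): compare ordinary PCA with
# Sparse PCA on simulated high-dimensional data in which only a small group of
# variables carries the signal. Sizes and the sparsity penalty are assumptions.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(6)
n, p, signal_vars = 100, 1000, 20                  # n < p, as in the thesis setting
latent = rng.normal(size=(n, 1))
X = rng.normal(size=(n, p))
X[:, :signal_vars] += 3.0 * latent                 # only the first 20 variables load

pca = PCA(n_components=1).fit(X)
spca = SparsePCA(n_components=1, alpha=2.0, random_state=0).fit(X)

nonzero_pca = np.sum(np.abs(pca.components_[0]) > 1e-8)
nonzero_spca = np.sum(np.abs(spca.components_[0]) > 1e-8)
print(f"PCA loading uses {nonzero_pca} variables; Sparse PCA uses {nonzero_spca}")
```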

Analysis of pavement condition data employing Principal Component Analysis and sensor fusion techniques

Rajan, Krithika January 1900
Master of Science / Department of Electrical and Computer Engineering / Dwight D. Day / Balasubramaniam Natarajan / This thesis presents an automated pavement crack detection and classification system based on image processing and pattern recognition algorithms. Pavement crack detection is important to Departments of Transportation around the country, as it is directly related to the maintenance of pavement quality. Manual inspection and analysis of pavement distress is the prevalent method for monitoring pavement quality; however, inspecting miles of highway sections and analyzing each is a cumbersome and time-consuming process. Hence, there has been research into automating crack detection. In this thesis, an automated crack detection and classification algorithm is presented. The algorithm is built around the statistical tool of Principal Component Analysis (PCA). Applying PCA to images yields the primary features of cracks, based on which cracked images are distinguished from non-cracked ones. The algorithm consists of three levels of classification: (a) pixel level, (b) subimage (32 × 32 pixels) level and (c) image level. Initially, at the lowest level, pixels are classified as cracked/non-cracked using adaptive thresholding. The classified pixels are then grouped into subimages to reduce processing complexity. Following the grouping process, the classification of subimages is validated based on the decision of a Bayes classifier. Finally, image-level classification is performed based on a subimage profile generated for the image. Following this stage, the cracks are further classified as sealed/unsealed depending on the number of sealed and unsealed subimages; this classification is based on the Fourier transform of each subimage. The proposed algorithm detects cracks aligned in longitudinal as well as transverse directions with respect to the wheel path with high accuracy. The algorithm can also be extended to detect block cracks, which comprise a pattern of cracks in both alignments.
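The pixel-to-subimage structure of the algorithm can be illustrated with the toy sketch below: threshold pixels, aggregate them into 32 × 32 blocks, and classify the blocks with a Bayes classifier. It is not the thesis's implementation; the image, the (here, global) threshold and the training labels are simulated assumptions.

```python
# Minimal sketch (not the thesis's implementation): threshold pixels, group them
# into 32x32 subimages, and classify subimages with a Bayes classifier on a
# simple crack-density feature. The image, threshold and labels are simulated.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(7)
image = rng.normal(loc=128, scale=20, size=(256, 256))   # hypothetical pavement image

# Pixel level: threshold dark pixels (a global stand-in for adaptive thresholding)
cracked_pixels = image < image.mean() - 2 * image.std()

# Subimage level: fraction of cracked pixels in each 32x32 block as the feature
blocks = cracked_pixels.reshape(8, 32, 8, 32).transpose(0, 2, 1, 3).reshape(64, -1)
features = blocks.mean(axis=1, keepdims=True)

# Validate subimage labels with a Bayes classifier (labels simulated here)
labels = (features[:, 0] > np.median(features)).astype(int)
clf = GaussianNB().fit(features, labels)
print("subimages flagged as cracked:", int(clf.predict(features).sum()))
```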

Investigation of the elemental profiles of Hypericum perforatum as used in herbal remedies

Owen, Jade Denise January 2014
The work presented in this thesis demonstrates that the use of elemental profiles for the quality control of herbal medicines can be applied to multiple stages of processing. A single method was developed for the elemental analysis of a variety of St John's Wort (Hypericum perforatum) preparations using Inductively Coupled Plasma – Optical Emission Spectroscopy (ICP-OES). The optimised method used 5 ml of nitric acid and microwave digestion reaching temperatures of 185 °C. Using NIST Polish tea (NIST INCT-TL-1), the method was found to be accurate, and the matrix effect from selected St John's Wort (SJW) preparations was found to be ≤22%. The optimised method was then used to determine the elemental profiles for a larger number of SJW preparations (raw herbs = 22, tablets = 20 and capsules = 12). Specifically, the method was used to determine the typical concentrations of 25 elements (Al, As, B, Ba, Be, Ca, Cd, Co, Cr, Cu, Fe, Hg, In, Mg, Mn, Mo, Ni, Pb, Pt, Sb, Se, Sr, V, Y and Zn) for each form of SJW, which ranged from not detected to 200 mg/g. To further interpret the elemental profiles, Principal Component Analysis (PCA) was carried out. This showed that different forms of SJW could be differentiated based on their elemental profile, and the SJW ingredient used (i.e. extract or raw herb) could be identified. The differences in the profiles were likely due to two factors: (1) the addition of bulking agents and (2) solvent extraction. In order to further understand how the elemental profile changes when producing the extract from the raw plant, eight SJW herb samples were extracted with four solvents (100% water, 60% ethanol, 80% ethanol and 100% ethanol) and analysed for their element content. The results showed that the transfer of elements from the raw herb to an extract was solvent and metal dependent. Generally, the highest concentrations of an element were extracted with 100% water, and these decreased as the concentration of ethanol increased. However, the transfer efficiency for Cu was highest with 60% ethanol. The solvents utilised in industry (60% and 80% ethanol) were found to preconcentrate some elements: Cu (+119%), Mg (+93%), Ni (+183%) and Zn (+12%) were found to preconcentrate in 60% v/v ethanol extracts, and Cu (+5%) and Ni (+30%) in 80% v/v ethanol extracts. PCA of the elemental profiles of the four types of extract showed that differentiation was observed between the different solvents, and as the ethanol concentration increased, the extracts became more standardised. Analysis of the bioactive compounds rutin, hyperoside, quercetin, hyperforin and adhyperforin, followed by subsequent Correlation Analysis (CA), revealed relationships between the elemental profiles and the molecular profiles. For example, strong correlations were seen between hyperoside and Cr as well as between quercetin and Fe. This shows potential for tuning elemental extractions for metal-bioactive compounds for increased bioactivity and bioavailability; however, further work is needed in this area.
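The kind of PCA separation of preparation forms described in this abstract is sketched below on simulated element concentrations. It is not the thesis's analysis; the element list used, the lognormal concentrations and the "bulking agent" effect are assumptions for illustration.

```python
# Minimal sketch (not the thesis's analysis): run PCA on element concentration
# profiles of herbal preparations and compare the scores by preparation form.
# The concentrations and the group effect are simulated assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

elements = ["Al", "B", "Ba", "Ca", "Cu", "Fe", "Mg", "Mn", "Ni", "Sr", "Zn"]
forms = np.array(["raw herb"] * 22 + ["tablet"] * 20 + ["capsule"] * 12)

rng = np.random.default_rng(8)
profiles = rng.lognormal(mean=0.0, sigma=1.0, size=(len(forms), len(elements)))
profiles[forms == "tablet"] *= 1.5                 # crude stand-in for bulking agents

z = StandardScaler().fit_transform(np.log(profiles))
scores = PCA(n_components=2).fit_transform(z)

for form in ("raw herb", "tablet", "capsule"):
    centre = scores[forms == form].mean(axis=0).round(2)
    print(f"{form}: mean PC1/PC2 score {centre}")
```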
