81

Detekce bdělosti mozku ze skalpového EEG záznamu za pomoci vyšších statistických metod / Detection of brain wakefulness from scalp EEG data with higher order statistics

Semeráková, Nikola January 2018 (has links)
This master's thesis deals with the detection of brain wakefulness from scalp EEG data using higher-order statistical methods. The first part describes electroencephalography, from signal generation and sensing through EEG artifacts and the frequency bands of the EEG signal to its possible processing. The concept of mental fatigue and the possibility of detecting it in the EEG signal are then described. Subsequently, the principles of the higher-order statistical methods PCA and ICA are presented, together with the specific ways these methods can decompose an EEG signal; from these, group spatial-frequency ICA was chosen as a suitable method for selecting partial oscillatory sources in the EEG signal. The next part describes the data acquisition, the proposed solution using the selected method, and the implemented algorithm, which was applied to real 256-lead scalp EEG data captured during a block task focused on subject alertness. Both the absolute and the relative power of the EEG signal were decomposed. The results show that fluctuations of the spatial-frequency patterns of relative power (especially in the theta and alpha bands) correspond significantly more closely with changes in reaction time and the error rate of the subjects performing the task. These observations are broadly consistent with previously published literature, and the current study shows that spatial-frequency ICA is able to blindly isolate spatial-frequency patterns whose fluctuations are statistically significantly correlated with parameters (reaction time, error rate) derived directly from the given task.
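The group spatial-frequency ICA used in this thesis is a specialised pipeline, but the core blind-separation step it relies on can be sketched with a plain FastICA decomposition of simulated band-power data. Everything below (source count, channel count, the mixing model) is an illustrative assumption, not the study's actual setup:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Simulated relative band power: 256 electrodes x 500 time windows,
# mixed from 3 latent oscillatory sources (stand-in for real scalp EEG).
sources = rng.laplace(size=(3, 500))
mixing = rng.normal(size=(256, 3))
power = mixing @ sources + 0.05 * rng.normal(size=(256, 500))

ica = FastICA(n_components=3, random_state=0, max_iter=1000)
unmixed = ica.fit_transform(power.T)   # (500, 3) recovered source activations
spatial_patterns = ica.mixing_         # (256, 3) spatial pattern per source
```

Fluctuations of the recovered activations over time windows would then be correlated with behavioural measures such as reaction time or error rate.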
82

A Framework For Analysing Investable Risk Premia Strategies / Ett ramverk för analys av investerbara riskpremiestrategier

Sandqvist, Joakim, Byström, Erik January 2014 (has links)
The focus of this study is to map, classify and analyse how different fully implementable risk premia strategies perform and are affected by different economic environments. The results are of interest for practitioners who currently invest in, or are considering investing in, risk premia strategies. The study also makes a theoretical contribution, since there is currently a lack of published work on this subject. A combination of the statistical methods cluster tree, spanning tree and principal component analysis is used, first to categorise the investigated risk premia strategies into different clusters based on their correlation characteristics, and secondly to find the strategies’ most important return drivers. Lastly, an analysis of how the clusters of strategies perform in different macroeconomic environments, here represented by inflation and growth, is conducted. The results show that the three most important drivers for the investigated risk premia strategies are a crisis factor, an equity directional factor and an interest rate factor. These three components explained about 18 percent, 14 percent and 10 percent of the variation in the data, respectively. The results also show that all four clusters, despite containing different types of risk premia strategies, experienced positive total returns during all macroeconomic phases sampled in this study. These results can be seen as indicative of a lower macroeconomic sensitivity among the risk premia strategies and more of an “alpha-like” behaviour. / Denna studie fokuserar på att kartlägga, klassificera och analysera hur riskpremiestrategier, som är fullt implementerbara, presterar och påverkas av olika makroekonomiska miljöer. Studiens resultat är av intresse för investerare som antingen redan investerar i riskpremiestrategier eller som funderar på att investera. Studien lämnar även ett teoretiskt bidrag eftersom det i dagsläget finns få publicerade verk som behandlar detta ämne.
För att analysera strategierna har en kombination av de statistiska metoderna cluster tree, spanning tree och principal component analysis använts. Detta för att dels kategorisera riskpremiestrategierna i olika kluster, baserat på deras inbördes korrelation, men också för att finna de faktorer som driver riskpremiestrategiernas avkastning. Slutligen har också en analys över hur de olika strategierna presterar under olika makroekonomiska miljöer genomförts, där de makroekonomiska miljöerna representeras av inflations- och tillväxtindikatorer. Resultaten visar att de tre viktigaste faktorerna som driver riskpremiestrategiernas avkastning är en krisfaktor, en aktiemarknadsfaktor och en räntefaktor. Dessa tre faktorer förklarar ungefär 18 procent, 14 procent och 10 procent av den undersökta datans totala varians. Resultaten visar också att alla fyra kluster, trots att de innehåller olika typer av riskpremiestrategier, genererade positiv avkastning under alla makroekonomiska faser som studerades. Detta resultat ses som ett tecken på en lägre makroekonomisk känslighet bland riskpremiestrategier och mer av ett alfabeteende.
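The combination of correlation-based clustering and principal component extraction described above can be sketched as follows. The returns are synthetic and the cluster count is an assumption, so this is a minimal illustration of the mechanics rather than the study's procedure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Toy daily returns for 12 hypothetical risk premia strategies
returns = rng.normal(0, 0.01, size=(1000, 12))
returns[:, :4] += rng.normal(0, 0.01, size=(1000, 1))  # shared "equity" driver

corr = np.corrcoef(returns.T)
# Cluster strategies on correlation distance (a cluster tree)
dist = np.sqrt(0.5 * (1 - corr))
condensed = dist[np.triu_indices(12, 1)]       # condensed form for linkage
clusters = fcluster(linkage(condensed, method="average"),
                    t=4, criterion="maxclust")

# Principal components of the correlation matrix -> candidate return drivers
eigvals, _ = np.linalg.eigh(corr)
explained = eigvals[::-1] / eigvals.sum()      # variance share per component
```

The leading eigenvectors play the role of the "crisis", "equity directional" and "interest rate" factors found in the study, and `explained` gives their variance shares.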
83

Data Mining the Effects of Storage Conditions, Testing Conditions, and Specimen Properties on Brain Biomechanics

Crawford, Folly Martha Dzan 10 August 2018 (has links)
Traumatic brain injury is highly prevalent in the United States, yet there is little understanding of how the brain responds during injurious loading. A confounding problem is that, because testing conditions vary between assessment methods, brain biomechanics cannot be fully understood. Data mining techniques were applied to discover how changes in testing conditions affect the mechanical response of the brain. Data were gathered from literature sources, and self-organizing maps were used to conduct a sensitivity analysis ranking the considered parameters by importance. Fuzzy C-means clustering was applied to find patterns in the data. The rankings and clustering varied between data sets, indicating that the strain rate and type of deformation influence the role of these parameters. Multivariate linear regression was applied to develop a model that can predict the mechanical response under different experimental conditions. Prediction of response depended primarily on strain rate, frequency, brain matter composition, and anatomical region.
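Fuzzy C-means, one of the methods named above, is compact enough to sketch from scratch; the two-dimensional feature vectors and cluster count below are invented purely for illustration:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and a membership
    matrix whose rows sum to 1 (each sample belongs partly to every cluster)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))                 # FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Two invented groups of 2-D "testing condition" feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
centers, u = fuzzy_cmeans(X, c=2)
```

Unlike hard k-means, the membership matrix `u` exposes how strongly each specimen record belongs to each cluster, which is what makes the method useful for data with overlapping testing conditions.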
84

Blind Acoustic Feedback Cancellation for an AUV

Frick, Hampus January 2023 (has links)
SAAB has developed an autonomous underwater vehicle (AUV) that can mimic a conventional submarine, allowing military fleets to exercise anti-submarine warfare. The AUV actively emits amplified versions of received sonar pulses to create the illusion of being a larger object. To prevent acoustic feedback, the AUV must distinguish between the sound it should actively respond to and its own emitted signal. This master's thesis examined techniques for preventing the AUV from responding to previously emitted signals, and thus avoiding acoustic feedback, without relying on prior knowledge of either the received signal or the signal emitted by the AUV. The two primary types of algorithms explored for this problem are blind source separation and adaptive filtering. Adaptive filters based on the Leaky Least Mean Square and Kalman algorithms showed promising results in attenuating the active response from the received signal. These filters exploit the fact that a certain hydrophone primarily receives the active response; this hydrophone serves as an estimate of the active response, since the signal it captures is considered unknown and is to be removed. The techniques based on blind source separation used the recordings of three hydrophones placed at various locations on the AUV to separate and estimate the received signal from the one emitted by the AUV. The results demonstrated that neither of the reviewed approaches is suitable for implementation on the AUV. The hydrophones are situated at a considerable distance from each other, resulting in distinct time delays between the receptions of the two signals; this is usually referred to as a convolutive mixture. Such mixtures are commonly handled in the frequency domain, where the convolutive mixture becomes an instantaneous one. However, the fact that the signals share the same frequency spectrum and are adjacent in time has proven highly challenging.
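As a rough illustration of the Leaky LMS idea, the sketch below adapts an FIR filter to cancel a simulated feedback path. The path coefficients, filter order and step size are assumptions for the toy example, not SAAB's configuration:

```python
import numpy as np

def leaky_lms(x, d, order=8, mu=0.01, leak=1e-3):
    """Leaky LMS: adapt an FIR filter w so that w @ x_recent predicts d,
    and return the residual (feedback-cancelled) signal."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        xn = x[n - order + 1:n + 1][::-1]   # current and recent input samples
        y = w @ xn                           # estimate of the feedback component
        e[n] = d[n] - y                      # residual = received minus estimate
        w = (1 - mu * leak) * w + mu * e[n] * xn   # leaky weight update
    return e

rng = np.random.default_rng(0)
emitted = rng.normal(size=5000)              # stand-in for the AUV's own response
h = np.array([0.6, 0.3, 0.1])                # assumed (unknown) feedback path
received = np.convolve(emitted, h)[:5000] + 0.1 * rng.normal(size=5000)
cleaned = leaky_lms(emitted, received, mu=0.05)
```

After convergence the residual approaches the background noise floor; the leak term keeps the weights bounded when the input is poorly excited, which matters in long underwater deployments.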
85

A Multi-Level Extension of the Hierarchical PCA Framework with Applications to Portfolio Construction with Futures Contracts / En flernivåsutbyggnad av ramverket för Hierarkisk PCA med tillämpningar på portföljallokering med terminskontrakt

Bjelle, Kajsa January 2023 (has links)
With an increasingly globalised market and growing asset universe, estimating the market covariance matrix becomes even more challenging. In recent years, there has been extensive development of methods aimed at mitigating these issues. This thesis takes its starting point in the recently developed Hierarchical Principal Component Analysis, in which a priori known information is taken into account when modelling the market correlation matrix. However, while showing promising results, the current framework only allows for fairly simple hierarchies with a depth of one. In this thesis, we introduce a generalisation of the framework that allows for an arbitrary hierarchical depth. We also evaluate the method in a risk-based portfolio allocation setting with futures contracts. Furthermore, we introduce a shrinkage method called Hierarchical Shrinkage, which uses the hierarchical structure to further regularise the matrix. The proposed models are evaluated with respect to how well-conditioned they are, how well they predict eigenportfolio risk, and how they perform when used to form the Minimum Variance Portfolio. We show that the proposed models result in sparse and easy-to-interpret eigenvector structures, improved risk prediction, lower condition numbers and longer holding periods, while achieving Sharpe ratios that are on par with our benchmarks. / Med en allt mer globaliserad marknad och växande tillgångsuniversum blir det alltmer utmanande att uppskatta marknadskovariansmatrisen. Under senare år har det skett en omfattande utveckling av metoder som syftar till att mildra dessa problem. Detta examensarbete tar sin utgångspunkt i det nyligen utvecklade ramverket Hierarkisk Principalkomponentanalys, där kunskap känd sedan innan används för att modellera marknadskorrelationerna. Även om det visar lovande resultat så tillåter det nuvarande ramverket endast enkla hierarkier med ett djup på ett.
I detta examensarbete introduceras en generalisering av detta ramverk, som tillåter ett godtyckligt hierarkiskt djup. Vi utvärderar också metoden i en riskbaserad portföljallokeringsmiljö med terminskontrakt. Vidare introducerar vi en krympningsmetod som vi kallar Hierarkisk Krympning, vilken använder den hierarkiska strukturen för att ytterligare regularisera matrisen. De föreslagna modellerna av korrelationsmatrisen utvärderas med avseende på hur välkonditionerade de är, hur väl de förutsäger egenportföljrisk samt hur de presterar i portföljallokeringssyfte i en Minimum Variance-portfölj. Vi visar att de introducerade modellerna resulterar i en gles och lättolkad egenvektorstruktur, förbättrad riskprediktion, lägre konditionstal och längre hållperiod, samtidigt som portföljerna uppnår Sharpe-kvoter i linje med benchmarkmodellerna.
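The thesis's Hierarchical Shrinkage is not reproduced here, but the conditioning effect it targets can be illustrated with plain linear shrinkage of a noisy sample correlation matrix toward the identity (toy returns with an assumed one-level block structure):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_assets = 60, 20               # few observations -> noisy correlations
blocks = np.repeat(np.arange(4), 5)    # assumed sector membership
returns = rng.normal(size=(n_obs, n_assets))
for b in range(4):
    returns[:, blocks == b] += rng.normal(size=(n_obs, 1))  # shared sector shock

sample = np.corrcoef(returns.T)
# Linear shrinkage toward the identity: eigenvalues move toward 1,
# which lowers the condition number of a full-rank correlation matrix.
alpha = 0.2
shrunk = (1 - alpha) * sample + alpha * np.eye(n_assets)

cond_before = np.linalg.cond(sample)
cond_after = np.linalg.cond(shrunk)
```

A hierarchical variant would shrink within- and between-block entries differently; the identity target used here is only the simplest member of that family.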
86

Understanding particulate matter - Material analyses of real-life diesel particulate filters and correlation to vehicles’ operational data / Att förstå partiklar - Analyser av verkliga dieselpartikelfilter och korrelationer till fordonsdriftparametrar

Nordin, Linus January 2021 (has links)
Syftet med denna studie var att undersöka effekterna av driftsparametrar på ett antal mätbara askrelaterade parametrar i dieselpartikelfilter (DPF) i tunga fordon. Tidigare studier visar att askans packningsdensitet, askflöde och hur askan fördelas inuti ett DPF är beroende av parametrar som temperatur, avgasflöde och oljeförbrukning ett fordon har. Det finns anledning att tro att dessa parametrar också påverkas av hur ett fordon används, varför olika driftsparametrar analyserades för korrelation med de uppmätta askparametrarna. De driftsparametrar som undersöktes i denna studie var medelhastighet, antal stopp per 100 km, tomgångsprocent och bränsleförbrukning. Studien startade med metodutveckling av mätning av askvikter hos DPF och jämförde tre olika metoder, benämnda I, II och III. Metod II, som innebar att väga en bit av ett filter före och efter rengöring av filterstycket från aska med tryckluft valdes som den mest pålitliga och användbara metoden eftersom den var snabbare, behövde mindre av varje DPF för att ge kompletta resultat och kunde användas vid analys av DPF-prover som inte hade undersökts innan de användes i ett fordon. Askvikten, tillsammans med den volymetriska fyllningsgraden och genom att känna till inloppsvolymen för ett DPF användes för att beräkna askans packningsdensitet. Fyllningsgraden och askfördelningsprofilen mättes med bildanalys av mikroskopbilder av sågade tvärsnitt av filterstycket. Korrelationsstudien utfördes sedan med dessa metoder och korrelerades med operativa data extraherade från databaser på Scania CV. För att studera vilka parametrar som var korrelerade till varandra utfördes en principal component analysis (PCA) med de operativa och uppmätta variablerna som en matris av data. 
PCA-analysen visade att tre principalkomponenter (PC) utgjorde >90% av variationen i de erhållna data och att plug/wall-förhållandet, som är ett numeriskt värde för askfördelningen, var starkt positivt korrelerat med ett fordons medelhastighet och negativt korrelerat med antalet stopp, tomgångsprocent och bränsleförbrukning. Vidare visade askflödet en svagare positiv korrelation med tomgångsprocent, antal stopp och bränsleförbrukning, medan oljeförbrukningen visade en ännu lägre korrelation med dessa parametrar. Detta indikerar att oljeförbrukningen ej skall ses som en konstant proportionell andel av bränsleförbrukningen för samtliga fordon vid beräkning av serviceintervall för DPFer. Askans packningsdensitet visade ingen till mycket låg korrelation med andra variabler i studien, vilket kan bero på att proverna med hög andel väggaska har använts betydligt kortare sträcka än övriga prover, vilket kan ha gjort att askan inte hunnit packas hårt i filterkanalerna. / The purpose of this study was to investigate the impact of operational parameters on a number of measurable ash-related quantities within diesel particulate filters (DPFs) of heavy-duty vehicles. Previous studies show that ash packing density, ash flow and how the ash is distributed inside a DPF are dependent on parameters such as temperature, exhaust flow profiles and how much oil a vehicle consumes. There is reason to believe that these parameters are also affected by how a vehicle is operated, which is why different operational parameters were analysed for correlation with the measured ash quantities. The operational parameters that were investigated in this study were average speed, number of stops per 100 km, idling percentage and fuel consumption. The study started with method development for measuring ash weights of DPFs and compared three different methods, named I, II and III.
Method II, which relies on weighing a piece of a filter substrate before and after cleaning the filter piece of ash with pressurized air, was chosen as the most reliable and useful method, as it was faster, needed less of each DPF to complete the analysis and could be used when analysing DPF samples that had not been investigated prior to their use in a vehicle. The ash weight, together with the volumetric filling degree and the known inlet volume of the DPF, was used to calculate the ash packing density. The filling degree and ash distribution profile were measured with image analysis of microscope images of sawed cross sections of the filter piece. The correlation study was then performed with these methods and correlated with operational data extracted from databases at Scania CV. To study which parameters were correlated to each other, a principal component analysis (PCA) was performed with the operational and measured variables as a matrix of data. The PCA showed that three principal components made up >90 % of the variation in the data and that the plug/wall ratio, which is a numerical value of the ash distribution, was strongly positively correlated with the average speed of a vehicle and negatively correlated with number of stops, idling percentage and fuel consumption. Furthermore, ash flow showed a weak positive correlation with idling percentage, number of stops and fuel consumption, while oil consumption showed an even weaker correlation with these parameters. This indicates that the oil consumption cannot be taken as a constant percentage of fuel consumption when calculating service intervals of DPFs. The ash packing density showed none to very low correlation with the other variables in the study, which could be because the DPFs with a high percentage of wall ash had a significantly lower runtime, meaning the ash may not have had time to be packed tightly in the filter channels.
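The PCA step of such a correlation study can be sketched on synthetic stand-ins for the operational and measured variables. The correlations below are constructed for illustration and are not Scania data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-ins for the variables in the study (invented values):
speed = rng.normal(70, 10, 40)                    # average speed
stops = 100 - speed + rng.normal(0, 3, 40)        # stops/100 km, anticorrelated
idle = 0.5 * stops + rng.normal(0, 2, 40)         # idling percentage
fuel = 0.3 * stops + rng.normal(0, 2, 40)         # fuel consumption
pw_ratio = 0.04 * speed + rng.normal(0, 0.2, 40)  # plug/wall ash ratio
X = np.column_stack([speed, stops, idle, fuel, pw_ratio])

# Standardise, then eigendecompose the correlation matrix (classical PCA)
corr = np.corrcoef(((X - X.mean(0)) / X.std(0)).T)
eigvals, eigvecs = np.linalg.eigh(corr)
explained = eigvals[::-1] / eigvals.sum()         # descending variance shares
top3 = explained[:3].sum()                        # variance in first three PCs
```

The signs of the loadings in `eigvecs` are what reveal, for example, a positive speed/plug-wall association and a negative speed/stops association.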
87

Daily pattern recognition of dynamic origin-destination matrices using clustering and kernel principal component analysis / Daglig mönsterigenkänning av dynamiska Origin-Destination-matriser med hjälp av clustering och kernel principal component analysis

Dong, Zhiwu January 2021 (has links)
The Origin-Destination (OD) matrix plays an important role in traffic management and urban planning. However, OD estimation demands large-scale data collection, which in the past has mostly been done through surveys with numerous limitations. With the development of communication technology and artificial intelligence, the transportation industry is experiencing new opportunities and challenges. Sensors bring big data, characterized by the 4Vs (Volume, Variety, Velocity, Value), to the transportation domain. This allows traffic practitioners to receive data covering large-scale areas and long time periods, even several years of data. At the same time, the introduction of artificial intelligence technology provides new opportunities and challenges in processing massive data. Advances in computer science have also brought revolutionary advancements to the field of transportation. All these new advances and technologies enable large data collection that can be used for extracting and estimating dynamic OD matrices for small time intervals and long time periods. Using Stockholm as the focus of the case study, this thesis estimates dynamic OD matrices covering data collected from the tolls located around Stockholm municipality. These dynamic OD matrices are used to analyze the day-to-day characteristics of the traffic flow that goes through Stockholm. In other words, the typical day-types of traffic through the city center are identified and studied in this work. This study analyzes the data collected by 58 sensors around Stockholm, containing nearly 100 million vehicle observations (12 GB). Furthermore, we consider and study the effects of dimensionality reduction on the revealing of the most common day-types by clustering. The considered dimensionality reduction techniques are Principal Component Analysis (PCA) and its variant Kernel PCA (KPCA). The results reveal that dimensionality reduction significantly reduces computational costs while resulting in reasonable day-types.
Day-type clusters reveal both expected and unexpected patterns and thus could be useful in traffic management, urban planning, and the design of congestion-tax strategies. / Origin-Destination (OD)-matrisen spelar en viktig roll i trafikledning och stadsplanering. Emellertid kräver OD-uppskattningen stor datainsamling, vilket tidigare mest har gjorts genom enkäter med många begränsningar. Med utvecklingen av kommunikationsteknik och artificiell intelligens upplever transportindustrin nya möjligheter och utmaningar. Sensorer ger stora datamängder som kännetecknas av 4V (på engelska: volym, variation, hastighet, värde) till transportdomänen. Detta gör det möjligt för trafikutövare att ta emot data som täcker storskaliga områden och långa tidsperioder, till och med flera års data. Samtidigt ger introduktionen av artificiell intelligens nya möjligheter och utmaningar i behandlingen av massiva data. Datavetenskapens framsteg har också lett till revolutionära framsteg inom transportområdet. Alla dessa nya framsteg och tekniker möjliggör stor datainsamling som kan användas för att extrahera och uppskatta dynamiska OD-matriser under små tidsintervall och långa tidsperioder. Genom att använda Stockholm som fokus för fallstudien uppskattar denna avhandling dynamiska OD-matriser som täcker data som samlats in från vägtullarna runt Stockholms kommun. Dessa dynamiska OD-matriser används för att analysera de dagliga egenskaperna hos trafikflödet genom Stockholms centrum. Med andra ord identifieras och studeras de typiska dagtyperna av trafik genom stadens centrum i detta arbete. Denna studie analyserar data som samlats in av 58 sensorer runt Stockholm, vilka innehåller nästan 100 miljoner fordonsobservationer (12 GB). Dessutom överväger och studerar vi effekterna av dimensionsreduktion på avslöjandet av de vanligaste dagtyperna genom klustring. De betraktade dimensionsreduktionsteknikerna är Principal Component Analysis (PCA) och dess variant Kernel PCA (KPCA).
Resultaten visar att dimensionsreduktion avsevärt minskar beräkningskostnaderna, samtidigt som den ger rimliga dagtyper. Dagtypskluster avslöjar såväl förväntade som oväntade mönster och kan därmed vara användbara i trafikledning, stadsplanering och utformning av strategier för trängselskatt.
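A minimal version of the KPCA-then-cluster pipeline might look like the sketch below. The day counts, OD dimensions and kernel parameters are illustrative assumptions, not the thesis's configuration:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Toy dynamic OD data: 90 "days" of a flattened 10 x 10 OD matrix.
# Weekday-like and weekend-like days get different mean demand levels.
weekday = rng.poisson(50, size=(64, 100))
weekend = rng.poisson(20, size=(26, 100))
days = np.vstack([weekday, weekend]).astype(float)

# Reduce dimension with Kernel PCA, then cluster the embedding into day-types
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-4)
embedded = kpca.fit_transform(days)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedded)
```

Clustering in the five-dimensional embedding rather than the raw 100-dimensional OD vectors is what produces the computational savings the abstract describes.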
88

Sparse Principal Component Analysis for High-Dimensional Data: A Comparative Study

Bonner, Ashley J. 10 1900 (has links)
Background: Through unprecedented advances in technology, high-dimensional datasets have exploded into many fields of observational research. For example, it is now common to expect thousands or millions of genetic variables (p) with only a limited number of study participants (n). Determining the important features proves statistically difficult, as multivariate analysis techniques become unstable and mathematically insufficient when n < p. Principal Component Analysis (PCA) is a commonly used multivariate method for dimension reduction and data visualization, but suffers from these issues. A collection of Sparse PCA methods have been proposed to counter these flaws but have not been tested in comparative detail. Methods: The performances of three Sparse PCA methods were evaluated through simulations. Data were generated for 56 different data structures, varying p, the number of underlying groups and the variance structure within them. Estimation and interpretability of the principal components (PCs) were rigorously tested. The Sparse PCA methods were also applied to a real gene expression dataset. Results: All Sparse PCA methods showed improvements upon classical PCA. Some methods were best at obtaining an accurate leading PC only, whereas others were better for subsequent PCs. The optimal choice of Sparse PCA method differs as within-group correlation and across-group variances vary; thankfully, one method repeatedly worked well under the most difficult scenarios. When applying the methods to real data, concise groups of gene expressions were detected with the most sparse methods. Conclusions: Sparse PCA methods provide a new and insightful way to detect important features amidst complex high-dimensional data. / Master of Science (MSc)
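A small n < p simulation in the spirit of this comparison can be run with scikit-learn's SparsePCA (one of several Sparse PCA formulations, not necessarily those evaluated in the thesis); all sizes and scales below are invented:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(5)
n, p = 30, 200                         # n < p, as in high-dimensional studies
latent = rng.normal(size=(n, 1))
X = 0.05 * rng.normal(size=(n, p))
X[:, :10] += latent                    # only the first 10 variables carry signal

dense = PCA(n_components=1).fit(X)
sparse = SparsePCA(n_components=1, alpha=1.0, random_state=0).fit(X)

# Sparse loadings zero out noise variables; dense loadings do not
n_zero_dense = int(np.sum(dense.components_ == 0))
n_zero_sparse = int(np.sum(sparse.components_ == 0))
```

The exactly-zero loadings are what make the sparse PC interpretable: the surviving variables form the "concise group" the abstract refers to.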
89

Analysis of pavement condition data employing Principal Component Analysis and sensor fusion techniques

Rajan, Krithika January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Dwight D. Day / Balasubramaniam Natarajan / This thesis presents an automated pavement crack detection and classification system based on image processing and pattern recognition algorithms. Pavement crack detection is important to Departments of Transportation around the country, as it is directly related to the maintenance of pavement quality. Manual inspection and analysis of pavement distress is the prevalent method for monitoring pavement quality. However, inspecting miles of highway sections and analyzing each is a cumbersome and time-consuming process; hence, there has been research into automating crack detection. In this thesis, an automated crack detection and classification algorithm is presented. The algorithm is built around the statistical tool of Principal Component Analysis (PCA). Applying PCA to images yields the primary features of cracks, based on which cracked images are distinguished from non-cracked ones. The algorithm consists of three levels of classification: a) pixel level, b) subimage (32 × 32 pixels) level and c) image level. Initially, at the lowest level, pixels are classified as cracked/non-cracked using adaptive thresholding. The classified pixels are then grouped into subimages to reduce processing complexity. Following the grouping process, the classification of subimages is validated based on the decision of a Bayes classifier. Finally, image-level classification is performed based on a subimage profile generated for the image. Following this stage, cracks are further classified as sealed/unsealed depending on the number of sealed and unsealed subimages; this classification is based on the Fourier transform of each subimage. The proposed algorithm detects cracks aligned in both longitudinal and transverse directions with respect to the wheel path with high accuracy.
The algorithm can also be extended to detect block cracks, which comprise a pattern of cracks in both alignments.
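The pixel-level stage, adaptive thresholding over 32 × 32 subimages, can be sketched as below. The threshold rule (local mean minus a multiple of the local standard deviation) and its parameters are plausible assumptions, not the thesis's exact ones:

```python
import numpy as np

def adaptive_threshold(img, block=32, k=1.5):
    """Flag pixels darker than (local mean - k * local std), block by block."""
    mask = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], block):      # iterate over 32 x 32 subimages
        for j in range(0, img.shape[1], block):
            sub = img[i:i + block, j:j + block]
            mask[i:i + block, j:j + block] = sub < sub.mean() - k * sub.std()
    return mask

rng = np.random.default_rng(6)
pavement = rng.normal(128, 10, size=(128, 128))  # synthetic pavement texture
pavement[60:64, :] -= 60                         # dark transverse "crack"
mask = adaptive_threshold(pavement)
```

Because the threshold adapts per subimage, lighting variation across the pavement surface does not require a global threshold; the later subimage and image-level stages then suppress the isolated false positives this stage produces.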
90

Investigation of the elemental profiles of Hypericum perforatum as used in herbal remedies

Owen, Jade Denise January 2014 (has links)
The work presented in this thesis has demonstrated that the use of elemental profiles for the quality control of herbal medicines can be applied at multiple stages of processing. A single method was developed for the elemental analysis of a variety of St John’s Wort (Hypericum perforatum) preparations using Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES). The optimised method used 5 ml of nitric acid and microwave digestion reaching temperatures of 185 °C. Using NIST Polish tea (NIST INCT-TL-1), the method was found to be accurate, and the matrix effect from selected St John’s Wort (SJW) preparations was found to be ≤22%. The optimised method was then used to determine the elemental profiles of a larger number of SJW preparations (raw herbs = 22, tablets = 20 and capsules = 12). Specifically, the method was used to determine the typical concentrations of 25 elements (Al, As, B, Ba, Be, Ca, Cd, Co, Cr, Cu, Fe, Hg, In, Mg, Mn, Mo, Ni, Pb, Pt, Sb, Se, Sr, V, Y and Zn) for each form of SJW, which ranged from not detected to 200 mg/g. To further interpret the elemental profiles, Principal Component Analysis (PCA) was carried out. This showed that different forms of SJW could be differentiated based on their elemental profile, and the SJW ingredient used (i.e. extract or raw herb) identified. The differences in the profiles were likely due to two factors: (1) the addition of bulking agents and (2) solvent extraction. In order to further understand how the elemental profile changes when producing the extract from the raw plant, eight SJW herb samples were extracted with four solvents (100% water, 60% ethanol, 80% ethanol and 100% ethanol) and analysed for their element content. The results showed that the transfer of elements from the raw herb to an extract was solvent- and metal-dependent. Generally, the highest concentrations of an element were extracted with 100% water, decreasing as the concentration of ethanol increased.
However, the transfer efficiency for Cu was highest with 60% ethanol. The solvents utilised in industry (60% and 80% ethanol) were found to preconcentrate some elements: Cu (+119%), Mg (+93%), Ni (+183%) and Zn (+12%) were found to preconcentrate in 60% v/v ethanol extracts, and Cu (+5%) and Ni (+30%) in 80% v/v ethanol extracts. PCA of the elemental profiles of the four types of extract showed that differentiation was observed between the different solvents, and as the ethanol concentration increased, the extracts became more standardised. Analysis of the bioactive compounds rutin, hyperoside, quercetin, hyperforin and adhyperforin, followed by subsequent Correlation Analysis (CA), revealed relationships between the elemental profiles and the molecular profiles. For example, strong correlations were seen between hyperoside and Cr, as well as between quercetin and Fe. This shows potential for tuning elemental extraction toward metal and bioactive-compound combinations with increased bioactivity and bioavailability; however, further work is needed in this area.
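The correlation analysis step amounts to pairwise correlation coefficients between element concentrations and compound levels. The sketch below uses data that are correlated by construction, purely to show the mechanics; the values are hypothetical, not the thesis's measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical per-sample element and compound levels for 22 herb samples
cr = rng.normal(1.0, 0.2, 22)                   # Cr concentration (arbitrary units)
hyperoside = 3.0 * cr + rng.normal(0, 0.1, 22)  # correlated by construction
fe = rng.normal(50, 5, 22)                      # Fe concentration
rutin = rng.normal(2.0, 0.3, 22)                # independent of Fe here

r_strong = np.corrcoef(cr, hyperoside)[0, 1]    # near 1 for the linked pair
r_weak = np.corrcoef(fe, rutin)[0, 1]           # near 0 for the unlinked pair
```

In practice one would compute the full element-by-compound correlation matrix and screen it for coefficients that survive a significance test at this sample size.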
