1

Metabolomic insights into the pharmacological and genetic inhibition of cyclooxygenase-2

Briggs, William Thomas Edward January 2017 (has links)
The cyclooxygenase (COX)-2 inhibitors, or “coxibs,” are excellent anti-inflammatory agents, but their reputation has been tarnished by the adverse cardiovascular (CV) events, including heart failure (HF), with which they are associated. Whilst the risk of HF represents the greatest adverse CV event signal seen with these compounds, it is also perhaps the least well understood and has often been explained away as a consequence of the thrombotic risk with which the coxibs are also associated. One recent hypothesis, put forward by Ahmetaj-Shala et al., suggests that asymmetric dimethylarginine (ADMA) may serve as a mechanistic bridge between COX-2 inhibition and HF. However, the ADMA-COX-2 hypothesis was developed based on findings in a constitutive mouse model of COX-2 knock-out (KO), which is compromised by severe developmental cardio-renal pathology, and on pharmacological studies which may not accurately reflect coxib use in clinical practice. Various studies have explored the metabolic changes induced by coxib treatment. However, these studies have been limited in scope and have tended to focus on specific pathways or certain tissues/bio-fluids. This has left large regions of the metabolome, in the context of coxib treatment, unexplored. Given that metabolic remodelling is a key feature of HF, changes in these metabolites may hold the key to understanding the pathogenesis of coxib-induced HF. L-Carnitine shuttles activated long-chain fatty acids (FAs) across the inner mitochondrial membrane to the mitochondrial matrix, where they are oxidised by β-oxidation. This is especially important in the heart, which derives the majority of its energy from the metabolism of FAs. Changes in carnitine metabolism are also seen in HF. It is therefore biologically plausible that derangements in carnitine metabolism may contribute to the pathogenesis of coxib-induced HF. This thesis employs a combination of targeted and untargeted metabolomic techniques, stable isotope labelling and quantitative reverse transcription polymerase chain reaction (RT-qPCR) to i) profile the metabolic changes induced by celecoxib and rofecoxib in the mouse; ii) specifically interrogate the effect of celecoxib, rofecoxib and global COX-2 gene deletion on carnitine synthesis, metabolism and shuttling; and iii) explore the advantages and disadvantages of the inducible post-natal global (IPNG) COX-2 KO (COX-2-/-) mouse, an alternative to the constitutive COX-2-/- mouse used by Ahmetaj-Shala et al. The results of this thesis demonstrate that i) celecoxib and rofecoxib have similar metabolomic consequences in the mouse; ii) carnitine metabolism may be affected by celecoxib, rofecoxib and dietary composition, via a peroxisome proliferator-activated receptor-alpha (PPAR-α) mediated effect on hepatic carnitine synthesis; and iii) the IPNG COX-2-/- mouse exhibits neither the severe developmental cardio-renal pathology nor the altered ADMA metabolism observed in the constitutive COX-2-/- mouse. These findings contradict those of Ahmetaj-Shala et al., oppose the ADMA-COX-2 hypothesis and highlight a potential role for carnitine metabolism and diet in coxib-induced HF.
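For readers unfamiliar with RT-qPCR quantification, gene-expression changes of the kind measured here are conventionally reported as fold changes via the 2^(-ΔΔCt) (Livak) method. The sketch below illustrates that standard calculation only; the thesis does not specify its quantification model, and the gene and Ct values are invented.

```python
# Standard 2^(-ΔΔCt) relative-quantification calculation used with RT-qPCR.
# Illustrative only: gene names and Ct values below are hypothetical.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Return the fold change of a target gene vs. a reference gene."""
    delta_ct_treated = ct_target_treated - ct_ref_treated  # normalise to reference
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control   # compare conditions
    return 2 ** (-delta_delta_ct)

# e.g. a carnitine-synthesis gene in coxib-treated vs. control liver:
fold_change = relative_expression(24.1, 18.0, 25.6, 18.2)  # ~2.5-fold up
print(f"fold change: {fold_change:.2f}")
```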
2

Multi-Variate Time Series Similarity Measures and Their Robustness Against Temporal Asynchrony

January 2015 (has links)
abstract: The amount of time series data generated is increasing due to the integration of sensor technologies with everyday applications, such as gesture recognition, energy optimization, health care, and video surveillance. The use of multiple sensors to simultaneously capture different aspects of real-world attributes has also led to an increase in dimensionality from uni-variate to multi-variate time series. This has facilitated richer data representation but has also necessitated algorithms that determine the similarity between two multi-variate time series for search and analysis. Various algorithms have been extended from the uni-variate to the multi-variate case, such as multi-variate versions of Euclidean distance, edit distance, and dynamic time warping. However, it has not been studied how these algorithms account for asynchrony in time series. Human gestures, for example, exhibit asynchrony in their patterns as different subjects perform the same gesture with varying movements at different speeds. In this thesis, we propose several algorithms (some of which also leverage metadata describing the relationships among the variates). In particular, we present several techniques that leverage the contextual relationships among the variates when measuring multi-variate time series similarities. Based on the way correlation is leveraged, various weighting mechanisms are proposed that determine the importance of a dimension for discriminating between the time series, since giving the same weight to each dimension can lead to misclassification. We next study the robustness of the considered techniques against different temporal asynchronies, including shifts and stretching. Exhaustive experiments were carried out on datasets with multiple types and amounts of temporal asynchrony. It was observed that the accuracy of algorithms that rely on data to discover variate relationships can be low in the presence of temporal asynchrony, whereas algorithms that rely on external metadata tend to be more robust against asynchronous distortions. Specifically, algorithms using external metadata achieve better classification accuracy and cluster separation than existing state-of-the-art work, such as EROS, PCA, and naive dynamic time warping. / Dissertation/Thesis / Masters Thesis Computer Science 2015
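As a rough sketch of the kind of per-dimension weighting described above (the weights and the aggregation scheme here are illustrative assumptions, not the thesis's actual algorithms), a weighted multi-variate dynamic time warping distance might look like:

```python
import numpy as np

def weighted_mv_dtw(a, b, w):
    """DTW between multi-variate series a (m, d) and b (n, d), with
    per-dimension weights w (d,) scaling each variate's contribution
    to the local distance. Illustrative sketch only."""
    m, n = len(a), len(b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # weighted squared Euclidean distance between time frames
            cost = np.sum(w * (a[i - 1] - b[j - 1]) ** 2)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[m, n])

# two 3-variate gesture recordings of different lengths, hypothetical weights
x = np.random.rand(50, 3)
y = np.random.rand(60, 3)
weights = np.array([0.5, 0.3, 0.2])  # e.g. derived from variate relationships
print(weighted_mv_dtw(x, y, weights))
```

Setting all weights equal recovers plain multi-variate DTW; the abstract's point is that unequal, metadata-informed weights can discriminate better and remain more robust under asynchrony.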
3

Leveraging Metadata for Extracting Robust Multi-Variate Temporal Features

January 2013 (has links)
abstract: In recent years, an increasing number of applications use multi-variate time series data in which multiple uni-variate time series coexist. However, there is a lack of systematic techniques for analyzing multi-variate time series. This thesis focuses on (a) defining a simplified inter-related multi-variate time series (IMTS) model and (b) developing a robust multi-variate temporal (RMT) feature extraction algorithm that can be used for locating, filtering, and describing salient features in multi-variate time series data sets. The proposed RMT features can also be used to support multiple analysis tasks, such as visualization, segmentation, and searching/retrieval based on multi-variate time series similarities. Experiments confirm that the proposed feature extraction algorithm is highly efficient and effective in identifying robust multi-scale temporal features of multi-variate time series. / Dissertation/Thesis / M.S. Computer Science 2013
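The abstract does not detail the RMT algorithm itself. Purely as a generic illustration of multi-scale temporal feature extraction (this is not the thesis's RMT method; window sizes, overlap, and descriptors are all assumptions), one might compute sliding-window descriptors at several scales:

```python
import numpy as np

def multiscale_features(X, scales=(8, 16, 32)):
    """Generic multi-scale descriptors for a multi-variate series X (t, d):
    per-window mean and standard deviation at each scale, with 50% overlap.
    Illustrative only; NOT the RMT algorithm from the thesis."""
    feats = []
    for w in scales:
        for start in range(0, len(X) - w + 1, w // 2):
            window = X[start:start + w]
            feats.append(np.concatenate([window.mean(axis=0),
                                         window.std(axis=0)]))
    return np.array(feats)

series = np.random.rand(128, 4)           # hypothetical 4-variate series
print(multiscale_features(series).shape)  # (num_windows, 2 * d)
```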
4

Search for the production of the Higgs boson associated with a pair of top quarks with the ATLAS detector at the LHC

Wang, Chao 06 December 2017 (has links)
The production of the Higgs boson associated with a pair of top quarks is one of the most important Higgs boson production modes, yet it has still not been observed. Its discovery is therefore one of the most challenging searches since the Higgs discovery: not only would it be the first observation of this Higgs production mode, but it would also allow a measurement of the Higgs boson's Yukawa coupling to the top quark. The measured results can address basic questions of the Standard Model (SM) and can also reveal hints of new physics beyond the SM prediction. An analysis searching for the production of the Higgs boson associated with a pair of top quarks in three-lepton final states is presented in this thesis. It is performed with the data collected by the ATLAS detector in 2015 and 2016 during the so-called "Run 2" campaign, corresponding to an integrated luminosity of 36.1 fb−1 at a center-of-mass energy of 13 TeV. It uses a boosted decision tree algorithm to discriminate between signal and background. The dominant background of fake leptons is estimated with the data-driven Matrix Method. For a 125 GeV Standard Model Higgs boson, an excess of events over the expected background from other SM processes is found with an observed significance of 2.2 standard deviations, compared to an expectation of 1.5 standard deviations. The best fit for the $t\bar tH$ production cross section is $1.5^{+0.8}_{-0.7}$ times the SM expectation, consistent with the SM value of the Yukawa coupling to top quarks.
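For context on the quoted numbers: the best-fit value is a signal strength, the measured cross section relative to the SM prediction, and expected significances of this kind are commonly approximated with the Asimov counting formula. Neither expression is quoted from the thesis itself; both are standard definitions.

```latex
\mu \;=\; \frac{\sigma_{t\bar{t}H}^{\mathrm{obs}}}{\sigma_{t\bar{t}H}^{\mathrm{SM}}}
\;=\; 1.5^{+0.8}_{-0.7},
\qquad
Z_{\mathrm{exp}} \;=\; \sqrt{2\left[(s+b)\ln\!\left(1+\frac{s}{b}\right)-s\right]},
```

where s and b are the expected signal and background yields in the selected region.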
5

Enterprise finance crisis forecast - Constructing an industrial forecast model with an Artificial Neural Network

Huang, Chih-li 14 June 2007 (has links)
Enterprise finance crisis forecasting can provide early warning to the managers and investors of an enterprise, and many scholars have proposed different alarm models to explain and predict whether an enterprise is facing a finance crisis. These models can be classified into three categories by analysis method: the first is the single-variate model, which is easy to implement; the second is the multi-variate model, which must satisfy certain statistical assumptions; and the third is the Artificial Neural Network model, which does not need to satisfy any statistical assumption. However, these models do not consider the industrial effect: different industries can have different finance crisis patterns. This study uses the advantage of Artificial Neural Networks, namely that they require no statistical assumptions, to build a process for an industry-specific enterprise finance crisis forecast model. Finally, the study uses real financial data to validate the process and compares it with the other models. The results show that the model proposed by this study is suitable for the Taiwanese electronics industry, but its performance in the Taiwanese architecture industry is not better than the other models.
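As an illustrative sketch only (the thesis's network architecture, input features, and data are not given here, so everything below is an assumption), a minimal neural-network distress classifier on financial-ratio features might look like:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# hypothetical features: e.g. debt ratio, ROA, current ratio, cash-flow ratio
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # placeholder financial ratios
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                    random_state=0))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```

An industry-specific variant, as the study advocates, would simply fit one such model per industry segment rather than pooling all firms.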
6

Quality Data Management in the Next Industrial Revolution : A Study of Prerequisites for Industry 4.0 at GKN Aerospace Sweden

Erkki, Robert, Johnsson, Philip January 2018 (has links)
The so-called Industry 4.0 is commonly denoted by its agitators as the fourth industrial revolution and promises to turn the manufacturing sector on its head. However, all that glitters is not gold, and in the backwash of hefty consultant fees questions arise: What are the drivers behind Industry 4.0? Which barriers exist? How does one prepare one's manufacturing procedures in anticipation of the (if ever) coming era? What is the Internet of Things, and what file sizes are characterised as big data? To answer these questions, this thesis aims to resolve the ambiguity surrounding the definitions of Industry 4.0, as well as clarify the fuzziness of a data-driven manufacturing approach: the comprehensive usage of data, including collection and storage, quality control, and analysis. In order to do so, this thesis was carried out as a case study at GKN Aerospace Sweden (GAS). Through interviews and observations, as well as a literature review of the subject, the thesis examined different processes' data-driven needs from a quality management perspective. The findings of this thesis show that the collection of quality data at GAS is mainly concerned with explicitly stated customer requirements. As such, the data available for the examined processes proved inadequate for multivariate analytics. The transition towards a data-driven state of manufacturing involves a five-stage process wherein data collection through sensors is seen as a key enabler for multivariate analytics and a deepened process knowledge. Together, these efforts form the prerequisites for Industry 4.0. In order to effectively start the transition towards Industry 4.0, near-term recommendations for GAS include: capturing all data, with emphasis on process data; improving the accessibility of data; and ultimately taking advantage of advanced analytics. Collectively, these undertakings pave the way for the actual improvements of Industry 4.0, such as digital twins, machine cognition, and process self-optimization. Finally, due to the delimitations of the case study, the findings are generalizable only to companies with similar characteristics, i.e. complex processes with low volumes.
7

Time Series Analysis Of Neurobiological Signals

Hariharan, N 10 1900 (has links) (PDF)
No description available.
8

Patient simulation: Generation of a machine learning “inverse” digital twin

Calderaro, Paolo January 2022 (has links)
In the medtech industry, models of the cardiovascular system and simulations are valuable tools for the development of new products and therapies. The simulator Aplysia has been developed over several decades and is able to replicate a wide range of phenomena involved in the physiology and pathophysiology of breathing and circulation. Aplysia is also able to simulate hemodynamic phenomena starting from a set of patient model parameters, supporting the idea of a "digital twin", i.e. a patient-specific representative simulation. Having a good starting estimate of the patient model parameters is crucial for starting the simulation. A first estimate can be given by looking at patient monitoring data, but medical expertise is required. The goal of this thesis is to address the parameter estimation task by developing machine learning and deep learning models that estimate the patient model parameters from a set of time-varying data that we will refer to as state variables. These state variables are descriptive of a specific patient, and for our project we generate them through Aplysia, starting from the simulation presets already available in the framework. These presets simulate different physiologies, from healthy cases to different cardiovascular diseases. The thesis proposes a comparison between a machine learning pipeline and more complex deep learning architectures for simultaneously predicting all the model parameters. This task is referred to as Multi-Target Regression (MTR), so performance is assessed in terms of MTR metrics. The results show that a gradient boosting regressor with a regressor-stacking approach achieves good overall performance, though it falls short on some target model parameters. The deep learning architectures did not produce any valuable results because of the limited amount of data: to deploy deep architectures such as ResNet or more complex Convolutional Neural Networks (CNNs), more simulations would be needed than were done for this thesis work.
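A minimal sketch of a regressor-stacking approach to multi-target regression of the kind described above (library choice, feature shapes, and the two-stage design are assumptions for illustration, not the thesis's exact pipeline):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))  # summary features of the state variables
Y = rng.normal(size=(300, 5))   # patient model parameters (5 targets)

# Stage 1: one gradient-boosted regressor fit independently per target.
stage1 = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
stage1.fit(X, Y)
Y_hat = stage1.predict(X)

# Stage 2 (regressor stacking): re-fit each target with the stage-1
# predictions appended as extra features, capturing inter-target
# correlations that independent regressors miss.
X_stacked = np.hstack([X, Y_hat])
stage2 = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
stage2.fit(X_stacked, Y)

# At inference time, chain the two stages:
Y_pred = stage2.predict(np.hstack([X, stage1.predict(X)]))
```

In practice the stage-1 predictions fed to stage 2 would be produced out-of-fold (e.g. via cross_val_predict) to avoid training-set leakage.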
9

Straight to the Heart: Classification of Multi-Channel ECG-signals using MiniROCKET

Christiansson, Stefan January 2023 (has links)
Machine Learning (ML) has revolutionized various domains, with biomedicine standing out as a major beneficiary. In the realm of biomedicine, Convolutional Neural Networks (CNNs) have played a pivotal role since their inception, particularly in applications such as time-series classification. Deep Convolutional Neural Networks (DCNNs) have shown promise in classifying electrocardiogram (ECG) signals. However, their deep architecture leads not only to a risk of over-fitting when insufficient data is at hand, but also to large computational costs. This study leverages the efficient architecture of MiniROCKET, a convolutional-kernel method, to explore improvements in ECG signal classification at Getinge. The primary objective is to enhance the efficiency of Electrical Activity of the Diaphragm (Edi) catheter position classification compared to the existing Residual Network (ResNet) approach. In the Intensive Care Unit (ICU), patients are often connected to mechanical ventilators operating based on Edi catheter-detected signals. However, weak or absent EMG signals can occur, necessitating ECG interpretation, which lacks the precision required for optimal Edi catheter placement. Clinicians have long recognized the challenges of manual Edi catheter positioning. Currently, positioning relies on manual interpretation of electromyography (EMG) and ECG signals from a 9-lead electrode array. Given the risk of electrode displacement due to patient movements, continuous monitoring by skilled clinicians is essential. This thesis demonstrates the potential of MiniROCKET in addressing these challenges. By training the model on Getinge's proprietary ECG patient dataset, the study aims to measure improvements in computational cost, accuracy, and user value compared to previous work on Edi catheter positioning at Getinge. The findings of this research hold significant implications for the future of ECG signal classification and the broader application of MiniROCKET in medical signal processing.
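A hedged sketch of the standard MiniROCKET classification pipeline, assuming the sktime implementation and its 3D-array input convention; the data shapes and class labels below are hypothetical stand-ins for Getinge's proprietary dataset:

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import MiniRocketMultivariate

# Hypothetical stand-in for multi-channel ECG/EMG windows:
# 100 instances, 9 channels (electrode leads), 512 samples each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 9, 512))
y = rng.integers(0, 3, size=100)  # e.g. catheter too high / correct / too low

# MiniROCKET expands each series into thousands of cheap convolutional
# features; a linear ridge classifier is then fit on those features.
transform = MiniRocketMultivariate(random_state=0)
features = transform.fit_transform(X)

clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(features, y)
print(clf.score(features, y))
```

The appeal over a ResNet, as the abstract notes, is that the convolutional kernels are fixed rather than learned, so training reduces to fitting a linear classifier: far cheaper and less prone to over-fitting on small datasets.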
10

Analysis of dispersion and propagation of fine and ultra fine particle aerosols from a busy road

Gramotnev, Galina January 2007 (has links)
Nano-particle aerosols are one of the major types of air pollutants in the urban indoor and outdoor environments. Therefore, determination of mechanisms of formation, dispersion, evolution, and transformation of combustion aerosols near the major source of this type of air pollution - busy roads and road networks - is one of the most essential and urgent goals. This thesis addresses this particular direction of research by filling in gaps in the existing physical understanding of aerosol behaviour and evolution. The applicability of the Gaussian plume model to combustion aerosols near busy roads is discussed, and the model is used for the numerical analysis of aerosol dispersion. New methods of determining emission factors from the average fleet on a road and from different types of vehicles are developed. Strong and fast evolution processes in combustion aerosols near busy roads are discovered experimentally, interpreted, modelled, and statistically analysed. A new major mechanism of aerosol evolution based on the intensive thermal fragmentation of nano-particles is proposed, discussed and modelled. A comprehensive interpretation is suggested for the mutual transformations of particle modes, the strong maximum of the total number concentration at an optimal distance from the road, and the increase in the proportion of small nano-particles far from the road. Modelling of the new mechanism is developed on the basis of the theory of turbulent diffusion, kinetic equations, and the theory of stochastic evaporation/degradation processes. Several new powerful statistical methods of analysis are developed for comprehensive data analysis in the presence of strong turbulent mixing and stochastic fluctuations of environmental factors and parameters. These methods are based upon the moving average approach, multi-variate and canonical correlation analyses. As a result, an important new physical insight into the relationships/interactions between particle modes, atmospheric parameters and traffic conditions is presented. In particular, a new definition of particle modes as groups of particles with similar diameters, characterised by strong mutual correlations, is introduced. Likely sources of different particle modes near a busy road are identified and investigated. Strong anti-correlations between some of the particle modes are discovered and interpreted using the derived fragmentation theorem. The results obtained in this thesis will be important for accurate prediction of aerosol pollution levels in the outdoor and indoor environments, for the reliable determination of human exposure and impact of transport emissions on the environment on local and possibly global scales. This work will also be important for the development of reliable and scientifically-based national and international standards for nano-particle emissions.
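For context, the Gaussian plume model mentioned above has a standard closed form. The sketch below implements the textbook expression with a ground-reflection term; all parameter values are hypothetical, and the thesis's actual road-source formulation is not reproduced here.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Standard Gaussian plume concentration C(y, z) downwind of a
    continuous point source of strength Q in a wind of speed u, with
    effective source height H. sigma_y and sigma_z are the dispersion
    parameters evaluated at the downwind distance of interest; they
    grow with distance. Includes the usual ground-reflection term."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# hypothetical values: breathing-height concentration near a low source
print(gaussian_plume(Q=1e14, u=2.0, y=0.0, z=1.5, H=0.5,
                     sigma_y=10.0, sigma_z=5.0))
```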
