31

Studium negaussovských světelných křivek pomocí Karhunenova-Loèveho rozvoje / Study of non-Gaussian light curves using the Karhunen-Loève expansion

Greškovič, Peter January 2011 (has links)
We present an innovative Bayesian method for estimating statistical parameters of time series data. The method works by comparing coefficients of the Karhunen-Loève expansion of observed data and of synthetic data with known parameters. We also present a new method for generating synthetic data with prescribed properties, and we demonstrate on a numerical example how this method can be used to estimate physically interesting features in power spectra calculated from observed light curves of some X-ray sources.
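The basic ingredient of the method, expanding each light curve in an empirical Karhunen-Loève basis and reading off its coefficients, can be sketched as follows. This is a generic illustrative sketch, not the thesis's actual data or code; the toy random-walk "light curves" and all variable names are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" light curves: 200 random-walk series of length 64.
curves = np.cumsum(rng.normal(size=(200, 64)), axis=1)

# Empirical Karhunen-Loeve basis: eigenvectors of the sample covariance matrix.
centered = curves - curves.mean(axis=0)
cov = centered.T @ centered / (len(curves) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
order = np.argsort(eigvals)[::-1]                # re-sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# KL coefficients of each curve: projections onto the basis functions.
coeffs = centered @ eigvecs

# The leading modes carry most of the variance; comparing the coefficient
# distributions of observed vs. synthetic curves is the basic idea.
explained = eigvals.cumsum() / eigvals.sum()
print("modes needed for 95% of variance:", int(np.searchsorted(explained, 0.95)) + 1)
```

By construction, the variance of the k-th coefficient column equals the k-th eigenvalue, which is what makes the coefficients a convenient statistical summary to compare across datasets.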
32

Analysis of non-stationary (seasonal/cyclical) long memory processes / L'analyse de processus non-stationnaire long mémoire saisonnier et cyclique

Zhu, Beijia 20 May 2013 (has links)
La mémoire longue, aussi appelée dépendance à long terme (LRD), est couramment détectée dans l'analyse de séries chronologiques dans de nombreux domaines, par exemple en finance, en économétrie, en hydrologie, etc. L'étude des séries temporelles à mémoire longue est donc d'une grande valeur. L'introduction du processus ARFIMA (fractionally autoregressive integrated moving average) a établi une relation entre l'intégration fractionnaire et la mémoire longue, et ce modèle a montré son pouvoir de prévision à long terme, d'où il est devenu l'un des modèles à mémoire longue les plus populaires dans la littérature statistique. Précisément, un processus à mémoire longue ARFIMA(p, d, q) est défini comme suit : Φ(B)(I − B)^d (Xt − µ) = Θ(B)εt, t ∈ Z, où Φ(z) = 1 − ϕ1z − · · · − ϕp z^p et Θ(z) = 1 + θ1z + · · · + θq z^q sont des polynômes d'ordre p et q, respectivement, avec des racines en dehors du cercle unité ; εt est un bruit blanc gaussien de variance constante σ²ε. Lorsque d ∈ (−1/2, 1/2), {Xt} est stationnaire et inversible. Cependant, l'hypothèse a priori de la stationnarité des données réelles n'est pas raisonnable. Par conséquent, de nombreux auteurs se sont efforcés de proposer des estimateurs applicables au cas non-stationnaire. Quelques questions se posent alors : quel estimateur doit être choisi en pratique, et à quoi doit-on faire attention lors de l'utilisation de ces estimateurs. À l'aide de simulations de Monte Carlo à échantillon fini, nous effectuons donc une comparaison complète des estimateurs semi-paramétriques, y compris les estimateurs de Fourier et les estimateurs d'ondelettes, dans le cadre des séries non-stationnaires.
À la suite de cette étude comparative, nous avons constaté que (i) sans un bon choix des échelles (scale trimming), les estimateurs d'ondelettes sont fortement biaisés et ont généralement une performance inférieure à celle des estimateurs de Fourier ; (ii) tous les estimateurs étudiés sont robustes à la présence d'une tendance linéaire en temps dans le niveau de {Xt} et d'effets GARCH dans la variance de {Xt} ; (iii) lorsque la probabilité de transition est basse, la consistance des estimateurs est préservée en présence de changements de régime dans le niveau de {Xt}, mais ces changements contaminent le résultat d'estimation ; de plus, l'estimateur d'ondelettes de log-régression fonctionne mal dans ce cas ; et (iv) en général, l'estimateur de Whittle de Fourier complètement étendu avec polynôme local (fully-extended local polynomial Whittle Fourier estimator) est préféré pour une utilisation pratique, et cet estimateur nécessite une bande (c'est-à-dire un nombre de fréquences utilisées dans l'estimation) plus grande que les autres estimateurs de Fourier considérés dans ces travaux. / Long memory, also called long range dependence (LRD), is commonly detected in the analysis of real-life time series data in many areas; for example, in finance, in econometrics, in hydrology, etc. Therefore the study of long-memory time series is of great value. The introduction of the ARFIMA (fractionally autoregressive integrated moving average) process established a relationship between fractional integration and long memory, and this model has shown its power in long-term forecasting, hence it has become one of the most popular long-memory models in the statistical literature. Specifically, an ARFIMA(p, d, q) process {Xt} is defined as follows: Φ(B)(I − B)^d (Xt − µ) = Θ(B)εt, t ∈ Z, where Φ(z) = 1 − ϕ1z − · · · − ϕp z^p and Θ(z) = 1 + θ1z + · · · + θq z^q are polynomials of order p and q, respectively, with roots outside the unit circle; and εt is Gaussian white noise with a constant variance σ²ε. When d ∈ (−1/2, 1/2), {Xt} is stationary and invertible.
However, the a priori assumption of stationarity of real-life data is not reasonable. Therefore many statisticians have made efforts to propose estimators applicable to the non-stationary case. Questions then arise as to which estimator should be chosen for applications, and what we should pay attention to when using these estimators. We therefore make a comprehensive finite-sample comparison of semi-parametric Fourier and wavelet estimators under the non-stationary ARFIMA setting. In light of this comparison study, we find that (i) without proper scale trimming the wavelet estimators are heavily biased and generally have an inferior performance to the Fourier ones; (ii) all the estimators under investigation are robust to the presence of a linear time trend in the levels of {Xt} and GARCH effects in the variance of {Xt}; (iii) the consistency of the estimators still holds in the presence of regime switches in the levels of {Xt}, which, however, tangibly contaminates the estimation results; moreover, the log-regression wavelet estimator works badly in this situation with small and medium sample sizes; and (iv) the fully-extended local polynomial Whittle Fourier (fextLPWF) estimator is preferred for practical use, and it requires a wider bandwidth (i.e. number of frequencies used in the estimation) than the other Fourier estimators.
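As a concrete sketch of this setting, the following generates an approximate ARFIMA(0, d, 0) series by truncated fractional integration of white noise, then estimates d with a basic log-periodogram (GPH) regression. This is a simpler Fourier estimator than the fully-extended local polynomial Whittle estimator the thesis recommends, and all function names, the truncation scheme, and the bandwidth choice are illustrative assumptions, not the thesis's code:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n binomial weights of (1 - B)^d: w_0 = 1, w_k = w_{k-1}(k - 1 - d)/k."""
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def simulate_arfima_0d0(d, n, rng):
    """Approximate ARFIMA(0, d, 0): apply a truncated (1 - B)^{-d} filter to white noise."""
    eps = rng.normal(size=2 * n)
    w = frac_diff_weights(-d, 2 * n)
    x = np.convolve(eps, w)[:2 * n]
    return x[n:]                                  # drop burn-in

def gph_estimate(x, m):
    """Basic log-periodogram (GPH) regression estimate of the memory parameter d."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    per = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    slope = np.polyfit(np.log(4 * np.sin(lam / 2) ** 2), np.log(per), 1)[0]
    return -slope

rng = np.random.default_rng(1)
x = simulate_arfima_0d0(0.3, 4096, rng)
d_hat = gph_estimate(x, m=64)                     # bandwidth m ~ sqrt(n)
print(f"GPH estimate of d: {d_hat:.2f}")
```

The bandwidth m is exactly the tuning knob discussed in point (iv) above: too few frequencies inflate the variance of the estimate, too many let short-run dynamics bias it.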
33

Dynamic flux estimation - a novel framework for metabolic pathway analysis

Goel, Gautam 20 August 2009 (has links)
High-throughput time series data characterizing magnitudes of gene expression, levels of protein activity, and the accumulation of select metabolites in vivo are being generated with increased frequency. These time profiles contain valuable information about the structure, dynamics and underlying regulatory mechanisms that govern the behavior of cellular systems. However, extraction and integration of this information into fully functional, computational and explanatory models has been a daunting task. Three types of issues have prevented successful outcomes in this inverse task of system identification. The first type pertains to the algorithmic and computational difficulties encountered in parameter estimation, be it using a genetic algorithm, nonlinear regression, or any other technique. The second type of issues stems from implicit assumptions that are made about the system topology and/or the functional model representing the biological system. These include the choice of intermediate pathway steps to be accounted for in the model, decisions on the irreversibility of a step, and the inclusion of ill-characterized regulatory signals. The third type of issue arises from the fact that there is often no unique set of parameter values, which when fitted to a model, reproduces the observed dynamics under one or several different sets of experimental conditions. This latter issue raises intriguing questions about the validity of the parameter values and the model itself. The central focus of my research has been to design a workflow for parameter estimation and system identification from biological time series data that resolves the issues outlined above. In this thesis I present the theory and application of a novel framework, called Dynamic Flux Estimation (DFE), for system identification from biological time-series data.
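The two-stage logic of DFE, estimate slopes from the measured time courses, then solve the stoichiometric system for the fluxes at each time point without assuming any kinetic rate laws, can be illustrated on a hypothetical two-step pathway. The pathway, its closed-form solution, and all names below are made up for the sketch:

```python
import numpy as np

# Toy linear pathway X1 -> X2 -> (out): dX1/dt = -v1, dX2/dt = v1 - v2.
S = np.array([[-1.0,  0.0],
              [ 1.0, -1.0]])        # stoichiometric matrix (species x fluxes)

t = np.linspace(0.0, 5.0, 101)
X1 = 2.0 * np.exp(-t)               # known solution when v1 = X1 and v2 = X2
X2 = 2.0 * t * np.exp(-t)
X = np.vstack([X1, X2])

# DFE step 1: estimate slopes from the (here noise-free) concentration time courses.
dXdt = np.gradient(X, t, axis=1)

# DFE step 2: solve S v = dX/dt at every time point; here S is square and invertible,
# so the dynamic flux profiles follow without positing any functional form for v.
V = np.linalg.solve(S, dXdt)

# The recovered fluxes should track the true ones, v1 = X1 and v2 = X2.
print("max error in v1:", float(np.max(np.abs(V[0] - X1))))
```

With noisy data, step 1 would use smoothing before differentiation, and an S with more fluxes than species would need extra constraints; the pointwise linear solve is the part that sidesteps the parameter-estimation issues described above.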
34

Detecting Political Framing Shifts and the Adversarial Phrases within Rival Factions and Ranking Temporal Snapshot Contents in Social Media

January 2018 (has links)
abstract: Social Computing is an area of computer science concerned with the dynamics of communities and cultures created through computer-mediated social interaction. Various social media platforms, such as social network services and microblogging, enable users to come together and create social movements expressing their opinions on diverse sets of issues, events, complaints, grievances, and goals. Methods for monitoring and summarizing these types of sociopolitical trends, their leaders and followers, messages, and dynamics are needed. In this dissertation, a framework comprising community- and content-based computational methods is presented to provide insights into multilingual and noisy political social media content. First, a model is developed to predict the emergence of viral hashtag breakouts, using network features. Next, another model is developed to detect and compare individual and organizational accounts, using a set of domain- and language-independent features. The third model exposes contentious issues driving reactionary dynamics between opposing camps. The fourth model develops community detection and visualization methods to reveal the underlying dynamics and the key messages that drive them. The final model presents a use-case methodology for detecting and monitoring foreign influence, wherein a state actor and news media under its control attempt to shift public opinion by framing information to support multiple adversarial narratives that facilitate their goals. In each case, a discussion of the novel aspects and contributions of the models is presented, as well as quantitative and qualitative evaluations. An analysis of multiple conflict situations is conducted, covering areas in the UK, Bangladesh, Libya and Ukraine where adversarial framing led to polarization, declines in social cohesion, social unrest, and even civil wars (e.g., Libya and Ukraine). / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
35

Does prior knowledge affect a rise or decline in curiosity? : A study on curiosity from an information theoretic perspective

Lind, Tim January 2015 (has links)
Studying whether curiosity can decline for a certain task could help us understand how to keep students both interested and engaged in all the different subjects that the education system has to offer. This study aimed first to find a way to measure curiosity, then to see if it changes over time, and whether it differs between low-performing and high-performing people. 20 people participated in two different sessions. At the first session, uncertainty was measured in the form of Shannon entropy. At the second session, participants chose between more or less informative options and then received feedback depending on their choice. The entropy proved to be a valid predictor of information choice and was used as a curiosity measurement in the form of a time cost per expected information gain. Patterns of curiosity change over time were found for the sample, the low-performing participants and the high-performing participants, where the sample and the high-performing participants showed a significant decline in curiosity. / Att studera huruvida nyfikenhet kan avta eller ej för en särskild uppgift kan hjälpa oss förstå hur man kan hålla studenter både intresserade och engagerade i de olika ämnena som utbildningssystemet erbjuder. Den här studien siktade på att först finna ett sätt att mäta nyfikenhet, för att sedan se om den förändras över tid, samt om det är någon skillnad mellan låg- och högpresterande personer. 20 studenter deltog vid två separata tillfällen. Vid första tillfället mättes osäkerhet i form av Shannons entropi. Vid det andra tillfället fick deltagarna välja mellan mer eller mindre informativa val, och få feedback utifrån detta. Entropin visade sig kunna förutsäga om deltagarna valde feedback, och användes därför som mått på nyfikenhet i form av tidskostnad per förväntad informationsvinst.
Mönster för nyfikenhetsförändring över tid kunde ses hos urvalet, de lågpresterande samt högpresterande deltagarna, där både urvalsgruppen samt de högpresterande deltagarna visade en signifikant effekt av avtagande nyfikenhet.
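The two measurement ideas above, uncertainty as Shannon entropy and curiosity as expected information gain per unit time cost, can be sketched numerically. The belief distribution and the time cost below are made-up values for illustration, not the study's data:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A participant's belief about which of four answers is correct.
belief = [0.7, 0.1, 0.1, 0.1]
print(f"uncertainty: {entropy(belief):.3f} bits")

# Expected information gain of feedback that fully reveals the answer:
# posterior entropy is 0, so the gain equals the current entropy.
gain = entropy(belief) - 0.0

# One operationalization of curiosity: information gained per second of
# time a participant is willing to spend waiting for the feedback.
time_cost_seconds = 6.0
curiosity = gain / time_cost_seconds
print(f"expected gain per second waited: {curiosity:.3f} bits/s")
```

A uniform belief over four options would give the maximal 2 bits of uncertainty, so under this measure a more uncertain participant has more to gain from choosing the informative option.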
36

EVALUATION OF UNSUPERVISED MACHINE LEARNING MODELS FOR ANOMALY DETECTION IN TIME SERIES SENSOR DATA

Bracci, Lorenzo, Namazi, Amirhossein January 2021 (has links)
With the advancement of the Internet of Things and the digitization of society, sensors recording time series data can be found in an ever-increasing number of places, including proximity sensors on cars, temperature sensors in manufacturing plants, and motion sensors inside smart homes. Society's ever-increasing reliance on these devices leads to a need for detecting unusual behaviour, which could be caused by a malfunctioning sensor or by an uncommon event. Such unusual behaviour is often referred to as an anomaly. To detect anomalous behaviour, advanced techniques combining mathematics and computer science, often grouped under the umbrella of machine learning, are frequently used. To help machines learn valuable patterns, human supervision is often needed, which in this case would correspond to using recordings that a person has already classified as anomalous or normal. Unfortunately, labelling data is time consuming, especially for the large datasets created from sensor recordings. Therefore, this thesis evaluates techniques that require no supervision to perform anomaly detection. Several machine learning models are trained on different datasets in order to gain a better understanding of which techniques perform better under different requirements, such as the presence of a smaller dataset or stricter requirements on inference time. Of the models evaluated, OCSVM achieved the best overall performance, with an accuracy of 85%, and K-means was the fastest model, taking 0.04 milliseconds to run inference on one sample. Furthermore, LSTM-based models showed the most potential for improvement with larger datasets.
/ Med utvecklingen av sakernas internet och digitaliseringen av samhället kan man registrera tidsseriedata på allt fler platser, bland annat genom närhetssensorer på bilar, temperatursensorer i tillverkningsanläggningar och rörelsesensorer i smarta hem. Detta ständigt ökande beroende av dessa enheter leder till ett behov av att upptäcka ovanligt beteende som kan orsakas av funktionsstörning i sensorn eller av en ovanlig händelse. Det ovanliga beteendet kallas ofta för en anomali. För att upptäcka avvikande beteenden används avancerad teknik som kombinerar matematik och datavetenskap, som ofta kallas maskininlärning. För att hjälpa maskiner att lära sig värdefulla mönster behövs ofta mänsklig tillsyn, vilket i detta fall skulle motsvara att använda inspelningar som en person redan har klassificerat som avvikelser eller normala punkter. Tyvärr är det tidskrävande att märka data, särskilt de stora datamängder som skapas från sensorinspelningar. Därför utvärderas i denna avhandling tekniker som inte kräver någon handledning för att utföra anomalidetektering. Flera olika maskininlärningsmodeller tränas på olika datamängder för att få en bättre förståelse för vilka tekniker som fungerar bättre när olika krav är viktiga, t.ex. en mindre datamängd eller strängare krav på inferenstid. Av de utvärderade modellerna gav OCSVM bäst totala prestanda, med en noggrannhet på 85 %, och K-means var den snabbaste modellen med en inferenstid på 0,04 millisekunder per sampel. Dessutom visade LSTM-baserade modeller störst förbättringspotential med större datamängder.
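One of the evaluated model families, the one-class SVM, can be sketched with scikit-learn on toy sensor windows. The signal, window size, injected anomaly, and hyperparameters below are illustrative choices, not the thesis's actual datasets or configuration:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# Sliding windows over a sensor-like signal; training data contains only
# normal behaviour, which is the unsupervised OCSVM setting.
signal = np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.normal(size=3000)
win = 20
train = np.lib.stride_tricks.sliding_window_view(signal, win)[::win]

# Test windows: copies of normal windows, plus one with an injected spike.
test = train[:50].copy()
test[10] += np.concatenate([np.zeros(15), 5.0 * np.ones(5)])   # anomaly

# nu bounds the fraction of training points treated as outliers.
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(train)
pred = model.predict(test)            # +1 = normal, -1 = anomaly
print("flagged windows:", np.where(pred == -1)[0])
```

No labels are used at any point: the model learns a boundary around the training distribution, and anything falling outside it at inference time is flagged.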
37

Pervasive Quantified-Self using Multiple Sensors

January 2019 (has links)
abstract: The advent of inexpensive commercial sensors and advances in information and communication technology (ICT) have brought forth the era of the pervasive Quantified-Self. Automatic diet monitoring is one of the most important aspects of the Quantified-Self because it is vital for ensuring the well-being of patients suffering from chronic diseases, as well as for providing a low-cost means of maintaining health for everyone else. Automatic dietary monitoring consists of: a) determining the type and amount of food intake, and b) monitoring eating behavior, i.e., the time, frequency, and speed of eating. Although some existing techniques address these ends, they suffer from low accuracy and low adherence. To overcome these issues, multiple sensors were utilized, because the availability of affordable sensors capturing different aspects of a situation has the potential to increase the available knowledge for the Quantified-Self. For a), I envision an intelligent dietary monitoring system that automatically identifies food items by using knowledge obtained from a visible-spectrum camera and an infrared-spectrum camera. This system outperforms the state-of-the-art systems for cooked food recognition by 25% while also minimizing user intervention. For b), I propose a novel methodology, IDEA, that performs accurate eating-action identification within eating episodes with an average F1-score of 0.92. This is an improvement of 0.11 in precision and 0.15 in recall for the worst-case users compared to the state-of-the-art. IDEA uses only a single wrist band, which includes four sensors, and provides feedback on eating speed every 2 minutes without obtaining any manual input from the user. / Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2019
38

Applied Science for Water Quality Monitoring

Khakipoor, Banafsheh 25 August 2020 (has links)
No description available.
39

Using Synthetic Data to Model Mobile User Interface Interactions

Jalal, Laoa January 2023 (has links)
Usability testing of User Interfaces (UI) is a central part of assuring high-quality UI design that provides good user experiences across multiple user groups. The process of usability testing often requires extensive collection of user feedback, preferably across multiple user groups, to ensure an unbiased observation of potential design flaws in the UI design. Attaining feedback from certain user groups has proven challenging, due to factors such as medical conditions that limit users' ability to participate in the usability test. An absence of these hard-to-access groups can lead to designs that fail to consider their unique needs and preferences, which may result in a worse user experience for these individuals. In this thesis, we address the current gaps in usability-test data collection by investigating whether the Generative Adversarial Network (GAN) framework can be used to generate high-quality synthetic user interactions of a particular UI gesture across multiple user groups. A collection of UI interactions from two user groups, the elderly and the young population, was conducted, where the UI interaction in focus was the drag-and-drop operation. The datasets, comprising both user groups, were used to train separate GANs, both using the doppelGANger architecture, and the generated synthetic data were evaluated based on their diversity, how well temporal correlations are preserved, and their performance compared to the real data when used in a classification task. The experimental results show that both GANs produce high-quality synthetic resemblances of the drag-and-drop operation, where the synthetic samples show both diversity and uniqueness when compared to the actual dataset. The synthetic datasets across both user groups also preserve statistical properties of the original dataset, such as the per-sample length distribution and the temporal correlations within the sequences.
Furthermore, the synthetic dataset shows, on average, similar performance across precision, recall and F1 scores compared to the actual dataset when used to train a classifier to distinguish between the elderly and younger populations' drag-and-drop sequences. Further research on using multiple UI gestures, on using a single GAN to generate UI interactions across multiple user groups, and a comparative study of different GAN architectures would provide valuable insights into unexplored potential and possible limitations within this problem domain.
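One of the fidelity checks described above, whether synthetic sequences preserve the temporal correlations of the real ones, can be sketched as follows. The toy drag-and-drop traces stand in for both the real and the GAN-generated data, since the thesis's doppelGANger models and datasets are not reproduced here; all names are illustrative:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a 1-D sequence up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = float(x @ x)
    return np.array([1.0] + [float(x[:-k] @ x[k:]) / denom
                             for k in range(1, max_lag + 1)])

rng = np.random.default_rng(7)

def drag_sequence(n, speed):
    """Toy stand-in for a drag-and-drop trace: smoothed random-walk positions."""
    steps = rng.normal(scale=speed, size=n)
    return np.cumsum(np.convolve(steps, np.ones(5) / 5, mode="same"))

real = [drag_sequence(100, 1.0) for _ in range(50)]
synthetic = [drag_sequence(100, 1.0) for _ in range(50)]   # stand-in for GAN output

# Fidelity check: compare the average autocorrelation structure of the
# two collections; a faithful generator keeps this gap small.
acf_real = np.mean([acf(s, 10) for s in real], axis=0)
acf_syn = np.mean([acf(s, 10) for s in synthetic], axis=0)
print("max ACF gap over lags 0..10:", float(np.max(np.abs(acf_real - acf_syn))))
```

The same comparison can be run per user group, which is how a preserved temporal structure in the elderly vs. young datasets would be checked separately.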
40

Jämförelse av datakomprimeringsalgoritmer för sensordata i motorstyrenheter / Comparison of data compression algorithms for sensor data in engine control units

Möller, Malin, Persson, Dominique January 2023 (has links)
Begränsad processor- och minneskapacitet är en stor utmaning för loggning av sensorsignaler i motorstyrenheter. För att kunna lagra större mängder data i dessa kan komprimering användas. För att kunna implementera komprimering i motorstyrenheter krävs det att algoritmerna klarar de begränsningar som finns gällande processorkapaciteten och ändå kan producera en godtagbar komprimeringsgrad. Denna avhandling jämför komprimeringsalgoritmer och undersöker vilken eller vilka algoritmer som är bäst lämpade för detta ändamål. Detta i syfte att förbättra loggning och därmed effektivisera felsökning. Detta gjordes genom att utveckla ett system som kör olika komprimeringsalgoritmer på samplad sensordata från motorstyrenheter och beräknar komprimeringstid och komprimeringsgrad. Resultaten visade att delta-på-delta-komprimering presterade bättre än xor-komprimering för dessa data. Delta-på-delta presterade betydligt bättre gällande komprimeringsgrad medan skillnaderna i komprimeringstid mellan algoritmerna var marginella. Delta-på-delta-komprimering bedöms ha god potential för implementering i loggningssystem för motorstyrenheter. Algoritmen bedöms som väl lämpad för loggning av mindre tidsserier vid viktiga händelser; för mer kontinuerlig loggning föreslås fortsatta studier för att undersöka hur komprimeringsgraden kan förbättras ytterligare. / Limited processor and memory capacity is a major challenge for logging sensor signals in engine control units. In order to be able to store larger amounts of data, compression can be used. To successfully implement compression algorithms in engine control units, it is essential that the algorithms can effectively handle the limitations associated with processor capacity while achieving an acceptable level of compression. This thesis compares compression algorithms on sensor data from engine control units in order to investigate which algorithm(s) are best suited for this application.
The work aims to improve the possibilities of logging sensor data and thus make troubleshooting of the engine control units more efficient. This was done by developing a system that performs compression on sampled sensor signals and calculates the compression time and ratio. The results indicated that delta-of-delta compression performed better than xor compression for the tested data sets. Delta-of-delta had a significantly better compression ratio, while the differences between the algorithms regarding compression time were minor. Delta-of-delta compression was judged to have good potential for implementation in engine control unit logging systems. The algorithm is deemed well suited for logging smaller time series during important events. For continuous logging of larger time series, further research is suggested in order to investigate the possibility of improving the compression ratio further.
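Delta-of-delta encoding itself is compact enough to sketch directly. This is a generic illustration of the scheme (store the first value, the first delta, then the deltas of consecutive deltas), not the thesis's implementation, and the sample values are made up:

```python
def delta_of_delta_encode(samples):
    """Store the first value, the first delta, then each delta of consecutive deltas."""
    if len(samples) < 2:
        return list(samples)
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    dods = [deltas[0]] + [b - a for a, b in zip(deltas, deltas[1:])]
    return [samples[0]] + dods

def delta_of_delta_decode(encoded):
    """Invert the encoding by re-accumulating the deltas."""
    if len(encoded) < 2:
        return list(encoded)
    out, delta = [encoded[0]], 0
    for dod in encoded[1:]:
        delta += dod
        out.append(out[-1] + delta)
    return out

# Nearly periodic sensor samples: most delta-of-delta values collapse to 0,
# which a subsequent variable-length bit encoding can store very compactly.
samples = [1000, 1060, 1120, 1180, 1245, 1305]
encoded = delta_of_delta_encode(samples)
print("encoded:", encoded)          # [1000, 60, 0, 0, 5, -5]
print("roundtrip ok:", delta_of_delta_decode(encoded) == samples)
```

The small magnitudes of the encoded residuals, rather than the list itself, are what yield the compression: in a real engine-control-unit logger they would be packed with a variable-length bit encoding instead of being stored as full-width integers.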
