About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Investigating Systematics In The Cosmological Data And Possible Departures From Cosmological Principle

Gupta, Shashikant 08 1900 (has links) (PDF)
This thesis contributes to the field of dark energy and observational cosmology. We have investigated possible direction-dependent systematic signals and non-Gaussian features in the Type Ia supernovae (SNe Ia) data. To detect these effects we propose a new method of analysis. Although we have used this technique on SNe Ia data, it is quite general and can be applied to other data sets as well.

SNe Ia are the most precise known distance indicators at cosmological distances. Their constant peak luminosity (after correction) makes them standard candles, and hence one can measure distances in the universe using SNe Ia. These distance measurements can be used to determine various cosmological parameters, such as the Hubble constant, the various components of the matter density, and dark energy. Recent SNe Ia observations have shown that the expansion of the universe is currently accelerating. This acceleration is explained by invoking a component of the universe with negative pressure, termed dark energy. It can be described by a homogeneous and isotropic fluid with the equation of state P = wρ, where w is allowed to be negative. A constant (Λ) in the Einstein equation, known as the cosmological constant, can explain the acceleration; in the fluid model it corresponds to w = -1. Other models of dark energy with w ≠ -1 can also explain the acceleration, but the precise nature of this mysterious component remains unknown. Although a wide range of dark energy models exists, the cosmological constant provides the simplest explanation for the accelerated expansion of the Universe. The equation of state parameter w has been investigated by recent surveys, but the results are still consistent with a wide range of dark energy models. In order to discriminate among the various cosmological models we need even more precise measurements of the distances and error bars in the SNe Ia data.

From the central limit theorem we expect Gaussian errors in any experiment that is free from systematic noise. However, in astronomy we do not have control over the observed phenomena and thus cannot control the systematic errors (due to physical processes in the Universe) in the observed data. The only way to deal with such data is to use appropriate statistical techniques. Among these systematic features, the direction-dependent ones are the more dangerous, since they may indicate a preferred direction in the Universe. To address the issue of direction-dependent features we have developed a new technique (henceforth the Δ statistic), which is based on extreme value theory. We have applied this technique to the available high-z SNe Ia data from Riess et al. (2004) and Riess et al. (2007). In addition, we have applied it to the data from the HST Key Project for the measurement of H0. Below we summarize the material presented in the thesis, chapter by chapter.

In the first chapter we present an introductory discussion of various basic cosmological notions, e.g. the Cosmological Principle (CP), the observational evidence in support of the CP and departures from it, distance measures, and large scale structure. The observed departures from the CP could be due to systematic errors and/or non-Gaussian error bars in the data. We also discuss the errors involved in the measurement process.

Basics of statistical techniques: In the next two chapters we discuss the basics of the statistical techniques used in this thesis and extreme value theory. Extreme value theory describes how to calculate the distribution of extreme events. The simplest of the distributions of extremes is known as the Gumbel distribution. We discuss features of the Gumbel distribution since it is used extensively in our analysis.

Δ statistic and features in the SNe data: In the fourth chapter we derive the Δ statistic and apply it to the SNe Ia data sets. An outline of the Δ statistic is as follows: (a) we define a plane which cuts the sky into two hemispheres, dividing the data into two subsets, one in each hemisphere; (b) we calculate the χ² in each hemisphere for an FRW universe, assuming a flat geometry; (c) the difference of the χ² values in the two hemispheres is calculated and maximized by rotating the plane. This maximum should follow the Gumbel distribution. Since it is difficult to calculate the analytic form of the Gumbel distribution, we calculate it numerically assuming Gaussian error bars. This gives the theoretical distribution for the maximum χ² difference defined above. The results indicate that GD04 shows systematic effects as well as non-Gaussian features, while the set GD07 is better in terms of both.

Non-Gaussian features in the H0 data: The HST Key Project measures the value of the Hubble constant at the level of 10% accuracy, which requires precise measurement of distances. It uses various methods to measure distance, for instance SNe Ia, the Tully-Fisher relation, surface-brightness fluctuations, etc. In the fifth chapter we apply the Δ statistic to the HST Key Project data in order to check for the presence of non-Gaussian and direction-dependent features. Our results show that although this data set seems to be free of direction-dependent features, it is inconsistent with Gaussian errors.

Analytic marginalization: The quantities of real interest in cosmology are ΩM and ΩΛ; the Hubble constant can in principle be treated as a nuisance parameter, and it is useful to marginalize over it. Although this can be done numerically using a Bayesian method, the Δ statistic does not allow it. In chapter six we propose a method to marginalize over H0 analytically. The χ² in this case is a complicated function of the errors in the data. We compare this analytic method with the Bayesian marginalization method, and the results show that the two are quite consistent. We apply the Δ statistic to the SNe data after the analytic marginalization. The results do not change much, indicating the insensitivity of the direction-dependent features to the Hubble constant.

A variation of the Δ statistic: As discussed earlier, it is difficult to calculate the theoretical distribution of Δ in general. However, if the parent distribution satisfies certain conditions, it is possible to derive the analytic form of the Gumbel distribution for Δ. In the seventh chapter we derive a variation of the Δ statistic in a way that allows us to calculate the analytic distribution. The results in this case differ from those presented earlier, but they confirm the same direction dependence and non-Gaussian features in the data.
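A minimal numerical sketch of the procedure outlined in this abstract, combining the hemisphere split of the Δ statistic with analytic marginalization over H0 (the standard χ² = A − B²/C reduction for an additive offset in the distance modulus, assuming Gaussian errors). The sky positions, error bars, and grid sizes below are illustrative assumptions, not the thesis's code or data:

```python
# A minimal sketch of the Delta statistic, assuming Gaussian errors and a
# flat FRW model with H0 treated as an additive offset in distance modulus.
# All names, grid sizes, and data below are illustrative, not the thesis's.
import numpy as np

def chi2_h0_marginalized(resid, sigma):
    """Chi-square with an additive offset (H0) marginalized analytically:
    chi2 = A - B**2 / C, the standard reduction for Gaussian errors."""
    w = 1.0 / sigma**2
    A = np.sum(w * resid**2)
    B = np.sum(w * resid)
    C = np.sum(w)
    return A - B**2 / C

def delta_statistic(plane_normals, sn_dirs, resid, sigma):
    """Maximize |chi2(north) - chi2(south)| over hemisphere-defining planes."""
    best = 0.0
    for n_hat in plane_normals:          # n_hat is the normal of the cutting plane
        north = sn_dirs @ n_hat > 0      # split the data into the two hemispheres
        best = max(best, abs(chi2_h0_marginalized(resid[north], sigma[north])
                             - chi2_h0_marginalized(resid[~north], sigma[~north])))
    return best

rng = np.random.default_rng(0)
n_sn = 180                                  # roughly the size of the Riess et al. sets
sn_dirs = rng.normal(size=(n_sn, 3))
sn_dirs /= np.linalg.norm(sn_dirs, axis=1, keepdims=True)
sigma = np.full(n_sn, 0.2)                  # illustrative distance-modulus errors (mag)
plane_normals = rng.normal(size=(200, 3))
plane_normals /= np.linalg.norm(plane_normals, axis=1, keepdims=True)

# Null distribution by Monte Carlo: redraw Gaussian residuals around the
# best-fit model and recompute the maximized difference each time. Its
# extreme-value shape is the numerically obtained Gumbel form in the text.
null_draws = [delta_statistic(plane_normals, sn_dirs, rng.normal(0.0, sigma), sigma)
              for _ in range(200)]
print(np.percentile(null_draws, [50, 95]))  # compare an observed Delta to these
```

An observed Δ falling in the far upper tail of this simulated null distribution would signal the kind of direction-dependent systematics or non-Gaussian errors the thesis tests for.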
12

Nové metody pro analýzu spánku a klasifikaci / Novel methods for sleep analysis and classification

Navrátilová, Markéta January 2020 (has links)
This master's thesis deals with methods for sleep analysis and classification. It describes the individual sleep stages and the patterns of biosignals during sleep, as well as classification methods. Features are extracted from the supplied ECG, EDA and RIP biosignals. Based on these features, the individual sleep stages are classified using a random forest classifier. The classifier's parameters are optimized and the achieved results are evaluated. The feature set is analyzed using dimensionality reduction methods and the results are compared with those of the standard classification. A solution for visualizing both the raw signals and the extracted features is designed and implemented. The achieved results are compared with published methods.
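A minimal sketch of the classification step described in this abstract, with a random forest whose parameters are optimized by grid search. The synthetic data, feature count, and parameter grid are illustrative stand-ins for the ECG/EDA/RIP-derived features, not the thesis's data or settings:

```python
# A minimal sketch: sleep stages predicted from per-epoch features with a
# random forest, parameters tuned by cross-validated grid search.
# The synthetic data stand in for ECG/EDA/RIP-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))        # e.g. HRV, EDA and respiration features per epoch
y = rng.integers(0, 5, size=1000)      # stages: W, N1, N2, N3, REM

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10]},
                      cv=3)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```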
13

Zobrazení a analýza aktivit neuronové sítě ve skrytých vrstvách / Activity of Neural Network in Hidden Layers - Visualisation and Analysis

Fábry, Marko January 2016 (has links)
The goal of this work was to create a system capable of visualising the activation function values produced by neurons in the hidden layers of neural networks used for speech recognition. The work also describes experiments comparing visualisation methods, visualisations of neural networks with different architectures, and visualisations of neural networks trained on different types of input data. The visualisation system implemented in this work is based on the previous work of Khe Chai Sim and is extended with new methods of data normalization. The Kaldi toolkit was used to prepare the neural network training data, and the CNTK framework was used for training. The core of this work, the visualisation system, was implemented in the Python scripting language.
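A minimal sketch of capturing hidden-layer activations for visualisation. The thesis pipeline used Kaldi for data preparation and CNTK for training; PyTorch forward hooks are substituted here purely to illustrate the idea, and the network shape and per-neuron normalization are assumptions:

```python
# A minimal sketch of hidden-layer activation capture and plotting.
# PyTorch stands in for the thesis's Kaldi/CNTK pipeline.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

net = nn.Sequential(nn.Linear(40, 128), nn.Sigmoid(),
                    nn.Linear(128, 128), nn.Sigmoid(),
                    nn.Linear(128, 10))

activations = {}
def grab(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

net[1].register_forward_hook(grab("hidden1"))  # sigmoid outputs of the first layer
net[3].register_forward_hook(grab("hidden2"))

frames = torch.randn(200, 40)                  # stand-in for speech feature frames
net(frames)

# Normalize to [0, 1] per neuron before plotting, one column per frame.
act = activations["hidden1"]
act = (act - act.min(0).values) / (act.max(0).values - act.min(0).values + 1e-9)
plt.imshow(act.numpy().T, aspect="auto", origin="lower")
plt.xlabel("frame"); plt.ylabel("neuron"); plt.savefig("hidden1.png")
```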
14

Constructing and representing a knowledge graph (KG) for Positive Energy Districts (PEDs)

Davari, Mahtab January 2023 (has links)
In recent years, knowledge graphs (KGs) have become essential tools for visualizing concepts and retrieving contextual information. However, constructing KGs for new and specialized domains like Positive Energy Districts (PEDs) presents unique challenges, particularly when dealing with unstructured texts and ambiguous concepts from academic articles. This study focuses on various strategies for constructing and inferring KGs, specifically incorporating entities related to PEDs, such as projects, technologies, organizations, and locations. We utilize visualization techniques and node embedding methods to explore the graph's structure and content, and apply filtering techniques and t-SNE plots to extract subgraphs based on specific categories or keywords. One of the key contributions is the use of the longest-path method, which allows us to uncover intricate relationships, the interconnectedness between entities, critical paths, and hidden patterns within the graph, providing valuable insights into the most significant connections. Additionally, community detection techniques were employed to identify distinct communities within the graph, providing further understanding of the structural organization and of the clusters of interconnected nodes with shared themes. The paper also presents a detailed evaluation of a question-answering system based on the KG, in which the Universal Sentence Encoder was used to convert text into dense vector representations and cosine similarity was calculated to find similar sentences. We assess the system's performance through precision and recall analysis and conduct statistical comparisons of graph embeddings, with Node2Vec outperforming DeepWalk in capturing similarities and connections. For edge prediction, logistic regression focusing on pairs of neighbours that lack a direct connection was employed to effectively identify potential connections among nodes within the graph. Probabilistic edge predictions, threshold analysis, and the significance of individual nodes are also discussed. Lastly, the advantages and limitations of using existing KGs (Wikidata and DBpedia) versus constructing new ones specifically for PEDs were investigated. It is evident that further research and data enrichment are necessary to address the scarcity of domain-specific information from existing sources.
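A toy sketch of two of the graph analyses mentioned in this abstract, community detection and the longest-path view, on an invented stand-in for the PED knowledge graph (networkx is assumed; the entities and relations below are illustrative, not the study's data):

```python
# A toy PED-style knowledge graph: community detection on the undirected
# view, and the longest path through the directed acyclic relation chain.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [("ProjectA", "SolarPV"), ("ProjectA", "Stockholm"),
         ("SolarPV", "Inverter"), ("ProjectB", "HeatPump"),
         ("HeatPump", "DistrictHeating"), ("ProjectB", "Vienna"),
         ("Stockholm", "Sweden"), ("Vienna", "Austria")]
G = nx.DiGraph(edges)

# Communities: clusters of thematically linked entities.
for i, c in enumerate(greedy_modularity_communities(G.to_undirected())):
    print(f"community {i}: {sorted(c)}")

# Longest path through the DAG: one "critical path" of chained relations.
print(nx.dag_longest_path(G))
```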
15

"The struggle of memory against forgetting" contemporary fictions and rewriting of histories

Patchay, Sheenadevi January 2008 (has links)
This thesis argues that a prominent concern among contemporary writers of fiction is the recuperation of lost or occluded histories. Increasingly, contemporary writers, especially postcolonial writers, are using the medium of fiction to explore those areas of political and cultural history that have been written over or unwritten by the dominant narrative of “official” History. The act of excavating these past histories is simultaneously both traumatic and liberating – which is not to suggest that liberation itself is without pain and trauma. The retelling of traumatic pasts can lead, as is portrayed in The God of Small Things (1997), to further trauma and pain. Postcolonial writers (and much of the world today can be construed as postcolonial in one way or another) are seeking to bring to the fore stories of the past which break down the rigid binaries upon which colonialism built its various empires, literal and ideological. Such writing has in a sense been enabled by the collapse, in postcolonial and postmodernist discourse, of the Grand Narrative of History, and its fragmentation into a plurality of competing discourses and histories. The associated collapse of the boundary between history and fiction is recognized in the useful generic marker “historiographic metafiction,” coined by Linda Hutcheon. The texts examined in this study are all variants of this emerging contemporary genre. What they also have in common is a concern with the consequences of exile or diaspora. This study thus explores some of the representations of how the exilic experience impinges on the development of identity in the postcolonial world. The identities of “displaced” people must undergo constant change in order to adjust to the new spaces into which they move, both literal and metaphorical, and yet critical to this adjustment is the cultural continuity provided by psychologically satisfying stories about the past. The study shows that what the chosen texts share at bottom is their mutual need to retell the lost pasts of their characters, the trauma that such retelling evokes and the new histories to which they give birth. These texts generate new histories which subvert, enrich, and pre-empt formal closure for the narratives of history which determine the identities of nations.
16

Stylometry: Quantifying Classic Literature For Authorship Attribution: A Machine Learning Approach

Yousif, Jacob, Scarano, Donato January 2024 (has links)
Classic literature is rich, be it linguistically, historically, or culturally, making it valuable for future studies. This project therefore selected a set of 48 classic books and conducted a stylometric analysis on them, adopting an approach used in related work: dividing the books into text segments, quantifying the resulting segments, and analyzing the books through the quantified values to understand their linguistic attributes. Beyond this, the project conducted classification tasks with different objectives. On the one hand, the study used the quantified values of the text segments for classification with models such as LightGBM and TabNet, to assess the applicability of this approach to authorship attribution. On the other hand, the study applied a state-of-the-art model, RoBERTa, to classification tasks using the segmented texts of the books directly, to evaluate its performance in authorship attribution. The results uncovered the characteristics of the books to a reasonable degree. Regarding the authorship attribution tasks, the results suggest that segmenting and quantifying text using stylometric analysis and supervised machine learning algorithms is practical for such tasks, although the approach, while promising, may still require further improvements to achieve optimal performance. Lastly, RoBERTa demonstrated high performance in authorship attribution tasks.
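A minimal sketch of the segment-quantify-classify pipeline described in this abstract. The stylometric feature set (mean word length, type-token ratio, mean sentence length, punctuation rate), the segment size, and the file paths are illustrative placeholders, not the thesis's exact setup; substitute any two long plain-text books for the hypothetical .txt files:

```python
# A minimal sketch: segment two books, quantify each segment with simple
# stylometric features, and attribute authorship with LightGBM.
import re
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

def segments(text, size=500):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words) - size + 1, size)]

def quantify(seg):
    words = seg.split()
    sents = [s for s in re.split(r"[.!?]+", seg) if s.strip()]
    return [np.mean([len(w) for w in words]),               # mean word length
            len(set(words)) / len(words),                   # type-token ratio
            np.mean([len(s.split()) for s in sents]),       # mean sentence length
            sum(seg.count(p) for p in ",;:") / len(words)]  # punctuation rate

books = {"austen": open("pride_and_prejudice.txt").read(),      # placeholder paths
         "dickens": open("great_expectations.txt").read()}
X, y = [], []
for author, text in books.items():
    for seg in segments(text):
        X.append(quantify(seg))
        y.append(author)

clf = LGBMClassifier(n_estimators=200)
print(cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```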
