About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Reducing the Hot Spot Effect in Wireless Sensor Networks with the Use of Mobile Data Sink

Chikhi, Yacine 22 May 2006 (has links)
The Hot Spot effect considerably reduces the lifetime of a wireless sensor network. The deployed network forms a tree structure in which the sink is the root and the nodes furthest out on the perimeter are the leaves. Each node collects information from the environment and transmits data packets toward the sink in a multi-hop fashion, via a "reachable" neighbour. The nodes closest to the sink transmit not only their own packets but also the packets they receive from "lower" nodes, and therefore exhaust their energy reserves and die faster than the rest of the sensors. We propose a technique that allows the data sink to identify the nodes suffering most severely from the Hot Spot effect and to move beyond them, and we explore the best trajectory for the data sink to follow. Performance results are presented to support the claim that our scheme is superior.
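The energy imbalance that motivates the mobile sink can be made concrete with a back-of-envelope traffic model (our own illustration, not taken from the thesis): assume a static sink at the centre of a disk of radius R, uniform node density, a communication range r of one ring per hop, and one packet generated per node per round. A node in ring h then relays its own packet plus an equal share of all traffic originating outside ring h.

    # Hypothetical back-of-envelope model of per-node relay load versus hop distance.
    import math

    R, r = 100.0, 10.0                       # field radius and communication range (assumed values)
    rings = int(R / r)

    def ring_area(h):                        # area of ring h, proportional to its node count
        return math.pi * ((h * r) ** 2 - ((h - 1) * r) ** 2)

    def area_outside(h):                     # area (hence traffic) originating beyond ring h
        return math.pi * (R ** 2 - (h * r) ** 2)

    for h in range(1, rings + 1):
        load = 1.0 + area_outside(h) / ring_area(h)
        print(f"ring {h}: ~{load:.1f} packets per node per round")

With these assumed numbers the innermost ring handles roughly 100 packets per node per round while the outermost ring handles 1, which is exactly the depletion pattern the thesis calls the Hot Spot effect.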
2

Left-Incompatible Term Rewriting Systems and Functional Strategy

SAKAI, Masahiko 12 1900 (has links)
No description available.
3

Index Reduction of Overlapping Strongly Sequential Systems

TOYAMA, Yoshihito, SAKAI, Masahiko, NAGAYA, Takashi 20 May 1998 (has links)
No description available.
4

Práticas normalizadoras na educação especial: um estudo a partir da rede municipal de ensino de Novo Hamburgo - RS (1950 a 2007)

Sardagna, Helena Venites 15 December 2008 (has links)
This thesis problematizes the conditions for the emergence of Special Education and the emphases in the practices perceived over the years in the context of the Rede Municipal de Ensino de Novo Hamburgo and in the Revista do Ensino do Rio Grande do Sul, from 1950 to 2007. This work allowed us to analyze Special Education as a modality of school education which, in governing the subjects positioned in this modality, directly and indirectly through inclusion governs everyone. It was also possible to problematize the norm as the articulator of these processes of normalization, which, in the context of biopolitics, are operated by technologies for regulating the population positioned in Special Education. This movement made it possible to question school inclusion policies which, in articulation with practices, set in motion the mechanisms necessary to ensure the normalization of children and youths. The research approach is close to the post-structuralist perspective and uses the Foucauldian concepts of discourse and normalization as its analytical tools.
5

Sobre a imersão de módulos com comprimento finito em módulos injetivos com comprimento finito

Lozada, John Freddy Moreno January 2016 (has links)
In this dissertation we study under what conditions a module of finite length can be embedded in an injective module of finite length. We also present the characterization, given by Hirano in [8], of the rings over which every module of finite length has an injective hull of finite length, the so-called ¶-V-rings. Moreover, we show that finite normalizing extensions of ¶-V-rings are also ¶-V-rings.
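To see why the finiteness condition on the injective hull is restrictive, consider a standard example (ours, not the dissertation's): over the ring of integers the condition already fails for the simplest finite-length module.

    % Over $\mathbb{Z}$, the simple module $\mathbb{Z}/p\mathbb{Z}$ has length 1, yet its
    % injective hull is the Pr\"ufer group, which has infinite length:
    \[
      E(\mathbb{Z}/p\mathbb{Z}) \;\cong\; \mathbb{Z}(p^{\infty}) \;=\; \mathbb{Z}[1/p]/\mathbb{Z},
      \qquad \operatorname{length}\bigl(\mathbb{Z}(p^{\infty})\bigr) = \infty .
    \]
    % Hence $\mathbb{Z}$ is not one of the rings characterized by Hirano's result.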
6

Modeling Structured Data with Invertible Generative Models

Lu, You 01 February 2022 (has links)
Data is complex and has a variety of structures and formats. Modeling datasets is a core problem in modern artificial intelligence. Generative models are machine learning models that model datasets with probability distributions. Deep generative models combine deep learning with probability theory so that they can model complicated datasets with flexible models. They have become one of the most popular classes of models in machine learning and have been applied to many problems. Normalizing flows are a novel class of deep generative models that allow efficient exact likelihood calculation, exact latent-variable inference, and sampling. They are constructed from functions whose inverse and Jacobian determinant can be computed efficiently. In this dissertation, we develop normalizing-flow-based generative models for complex datasets. In general, data can be categorized into unlabeled data, labeled data, and weakly labeled data, and we develop models for each of these three types.

First, we develop Woodbury transformations, flow layers for general unsupervised normalizing flows that improve the flexibility and scalability of current flow-based models. Woodbury transformations achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity. In contrast with other operations used in state-of-the-art normalizing flows, Woodbury transformations enable (1) high-dimensional interactions, (2) efficient sampling, and (3) efficient likelihood evaluation. Other similar operations, such as 1x1 convolutions, emerging convolutions, or periodic convolutions, allow at most two of these three advantages. In our experiments on multiple image datasets, we find that Woodbury transformations allow learning of higher-likelihood models than other flow architectures while still enjoying their efficiency advantages.

Second, we propose conditional Glow (c-Glow), a conditional generative flow for structured output learning, an advanced variant of supervised learning with structured labels. Traditional structured prediction models try to learn a conditional likelihood, i.e., p(y|x), to capture the relationship between the structured output y and the input features x. For many models, computing the likelihood is intractable, so they are hard to train, requiring surrogate objectives or variational inference to approximate the likelihood. C-Glow benefits from the ability of flow-based models to compute p(y|x) exactly and efficiently, so learning with c-Glow requires neither a surrogate objective nor inference during training. Once trained, we can directly and efficiently generate conditional samples, and we develop a sample-based prediction method that uses this advantage to perform efficient and effective inference. In our experiments, we test c-Glow on five different tasks; it outperforms the state-of-the-art baselines on some tasks and predicts comparable outputs on the others. The results show that c-Glow is applicable to many different structured prediction problems.

Third, we develop label learning flows (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows. The main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by weak signals. 
We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once a model is trained, we can make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems, and experimental results show that our method outperforms many state-of-the-art alternatives. Our research shows the advantages and versatility of normalizing flows. / Doctor of Philosophy / Data is now more affordable and accessible, and at the same time datasets are more and more complicated. Modeling data is a key problem in modern artificial intelligence and data analysis. Deep generative models combine deep learning and probability theory and are now a major way to model complex datasets. In this dissertation, we focus on a novel class of deep generative models -- normalizing flows. They are becoming popular because of their ability to efficiently compute exact likelihoods, infer exact latent variables, and draw samples. We develop flow-based generative models for different types of data: unlabeled data, labeled data, and weakly labeled data. First, we develop Woodbury transformations for unsupervised normalizing flows, which improve the flexibility and expressiveness of flow-based models. Second, we develop conditional generative flows for an advanced supervised learning problem, structured output learning, which removes the need for the approximations and surrogate objectives used in traditional (deep) structured prediction models. Third, we develop label learning flows, a general framework for weakly supervised learning problems. Our research improves the performance of normalizing flows and extends their applications to many supervised and weakly supervised problems.
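The two identities behind Woodbury transformations can be sketched in a few lines of dense linear algebra (a minimal illustration under our own assumptions; the dissertation applies the layer channel- and spatial-wise inside a full flow). With W = I + UV for low-rank factors U (d x k) and V (k x d), both the inverse and the log-determinant only ever require k x k operations.

    # Minimal dense sketch of a Woodbury-style invertible layer (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 64, 4                              # data dimension and low-rank width, k << d
    U = 0.1 * rng.normal(size=(d, k))
    V = 0.1 * rng.normal(size=(k, d))

    def forward(x):
        # y = (I + U V) x, computed without ever forming the d x d matrix.
        return x + U @ (V @ x)

    def log_abs_det():
        # Sylvester's determinant identity: det(I_d + U V) = det(I_k + V U).
        return np.linalg.slogdet(np.eye(k) + V @ U)[1]

    def inverse(y):
        # Woodbury matrix identity: (I + U V)^{-1} = I - U (I_k + V U)^{-1} V.
        return y - U @ np.linalg.solve(np.eye(k) + V @ U, V @ y)

    x = rng.normal(size=d)
    assert np.allclose(inverse(forward(x)), x)

Both the sampling direction (inverse) and the likelihood term (log_abs_det) cost on the order of d*k^2 rather than d^3, which is the efficiency advantage the abstract refers to.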
7

Comparison of Discriminative and Generative Image Classifiers

Budh, Simon, Grip, William January 2022 (has links)
In this report, a discriminative and a generative image classifier, used to classify images of handwritten digits from zero to nine, are compared. The aim of this project was to compare the accuracy of the two classifiers in the absence and presence of perturbations of the images. The report describes the architectures and the training of the classifiers using PyTorch. Images were perturbed in four ways for the comparison: the first perturbation was a model-specific attack that perturbed images so as to maximize the likelihood of misclassification, while the other three perturbations changed pixels in a stochastic fashion. Furthermore, the influence of training on perturbed images on the robustness of the classifiers against image perturbations was studied. The conclusions drawn in this report were that the accuracy of the two classifiers on unperturbed images was similar and that the generative classifier was more robust against the model-specific attack, while the discriminative classifier was more robust against the stochastic noise and was significantly more robust against image perturbations when trained on perturbed images. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
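The model-specific attack is not named in this abstract; a common gradient-based choice that fits the description (perturbing each image so as to maximize the chance of misclassification) is the fast gradient sign method. The sketch below is an assumption rather than a statement about the report, written against the PyTorch stack the report mentions; model, images, and labels are hypothetical placeholders.

    # Hypothetical FGSM-style attack: one signed-gradient step that increases the loss.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, labels, eps=0.1):
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)    # loss of the current predictions
        loss.backward()                                  # gradient of the loss w.r.t. the pixels
        adversarial = images + eps * images.grad.sign()  # step that locally maximizes the loss
        return adversarial.clamp(0.0, 1.0).detach()      # keep pixels in the valid range

The stochastic perturbations in the report would instead add model-independent noise, for example images + eps * torch.randn_like(images).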
8

Statistical Inference for Models with Intractable Normalizing Constants

Jin, Ick Hoon 16 December 2013 (has links)
In this dissertation, we propose two new algorithms for statistical inference in models with intractable normalizing constants: the Monte Carlo Metropolis-Hastings (MCMH) algorithm and the Bayesian Stochastic Approximation Monte Carlo (BSAMC) algorithm. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm: at each iteration, it replaces the unknown normalizing constant ratio by a Monte Carlo estimate. Although the algorithm violates the detailed balance condition, we show that it still converges to the desired target distribution under mild conditions. The BSAMC algorithm works by simulating from a sequence of approximated distributions using the SAMC algorithm, and a strong law of large numbers has been established for BSAMC estimators under mild conditions.

One significant advantage of our algorithms over auxiliary-variable MCMC methods is that they avoid the requirement for perfect samples, so they can be applied to many models for which perfect sampling is unavailable or very expensive. In addition, although a normalizing constant approximation is also involved in BSAMC, BSAMC is very robust to initial parameter guesses thanks to the powerful sample-space exploration of SAMC. BSAMC also provides a general framework for approximate Bayesian inference for models whose likelihood function is intractable: sampling from a sequence of approximated distributions whose average converges to the target distribution.

With these two algorithms, we demonstrate how the SAMCMC method can be applied to estimate the parameters of exponential random graph models (ERGMs), a typical example of statistical models with intractable normalizing constants. We show that the resulting estimate is consistent, asymptotically normal, and asymptotically efficient. Compared to the MCMLE and SSA methods, a significant advantage of SAMCMC is that it overcomes the model degeneracy problem. Its strength comes from its varying truncation mechanism, which enables SAMCMC to avoid model degeneracy through re-initialization. MCMLE and SSA lack this re-initialization mechanism and tend to converge to a solution near the starting point, so they often fail for models which suffer from model degeneracy.
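The key MCMH step described above, replacing the intractable ratio of normalizing constants with an importance-sampling estimate built from auxiliary draws at the current parameter, can be sketched on a toy model. This is our own illustration with an assumed unnormalized density, not code from the dissertation, and it further assumes a symmetric random-walk proposal and a flat prior on the parameter.

    # Toy MCMH update for f(x | theta) = exp(-theta * x^2) with theta > 0.
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x, theta):                          # unnormalized density
        return np.exp(-theta * x**2)

    def sample_model(theta, m):
        # Exact sampler for the toy model; in realistic models this would be an MCMC run.
        return rng.normal(0.0, np.sqrt(0.5 / theta), size=m)

    def mcmh_step(theta, x_data, m=500, step=0.2):
        theta_prop = abs(theta + step * rng.normal())          # reflected random-walk proposal
        y = sample_model(theta, m)                             # auxiliary draws at the current theta
        # Monte Carlo estimate of Z(theta_prop) / Z(theta):
        ratio_hat = np.mean(f(y, theta_prop) / f(y, theta))
        log_r = (np.sum(np.log(f(x_data, theta_prop))) - np.sum(np.log(f(x_data, theta)))
                 - x_data.size * np.log(ratio_hat))            # estimated Metropolis-Hastings log-ratio
        return theta_prop if np.log(rng.uniform()) < log_r else theta

Because the toy model's true normalizing constant is a known Gaussian integral, the estimate ratio_hat can be checked directly, a convenient sanity test before moving to models such as ERGMs where Z(theta) really is intractable.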
