  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Creating space for fishermen's livelihoods : Anlo-Ewe beach seine fishermen's negotiations for livelihood space within multiple governance structures in Ghana /

Kraan, Marloes, January 2009 (has links)
Diss. Amsterdam : University, 2009. / DVD title: If you do good : beach seine fishing in Ghana.
12

Canine health, disease and death : data from a Swedish animal insurance database /

Egenvall, Agneta, January 1900 (has links) (PDF)
Diss. (summary) Uppsala : Sveriges lantbruksuniv. / With 5 papers appended.
13

Latent pattern mixture models for binary outcomes /

Saba, Laura M. January 2007 (has links)
Thesis (Ph.D. in Biostatistics) -- University of Colorado Denver, 2007. / Typescript. Includes bibliographical references (leaves 70-71). Free to UCD affiliates. Online version available via ProQuest Digital Dissertations.
14

A comparison of stratified and unstratified modeling for binary logistic regression in the presence of a simulated interaction

Beebe, Claire Elizabeth. January 2008 (has links) (PDF)
Thesis--University of Oklahoma. / Bibliography: leaves 48-49.
15

Analyzing The Community Structure Of Web-like Networks: Models And Algorithms

Cami, Aurel 01 January 2005 (has links)
This dissertation investigates the community structure of web-like networks (i.e., large, random, real-life networks such as the World Wide Web and the Internet). Recently, it has been shown that many such networks have a locally dense and globally sparse structure, with certain small, dense subgraphs occurring much more frequently than they do in the classical Erdős–Rényi random graphs. This peculiarity--which is commonly referred to as community structure--has been observed in seemingly unrelated networks such as the Web, email networks, citation networks, biological networks, etc. The pervasiveness of this phenomenon has led many researchers to believe that such cohesive groups of nodes might represent meaningful entities. For example, in the Web such tightly-knit groups of nodes might represent pages with a common topic, geographical location, etc., while in neural networks they might represent evolved computational units. The notion of community has emerged in an effort to formalize the empirical observation of the locally dense, globally sparse structure of web-like networks. In the broadest sense, a community in a web-like network is defined as a group of nodes that induces a dense subgraph which is sparsely linked with the rest of the network. Due to a wide array of envisioned applications, ranging from crawlers and search engines to network security and network compression, there has recently been widespread interest in finding efficient community-mining algorithms. In this dissertation, the community structure of web-like networks is investigated by a combination of analytical and computational techniques: First, we consider the problem of modeling web-like networks. In recent years, many new random graph models have been proposed to account for some recently discovered properties of web-like networks that distinguish them from the classical random graphs.
The vast majority of these random graph models take into account only the addition of new nodes and edges. Yet, several empirical observations indicate that deletion of nodes and edges occurs frequently in web-like networks. Inspired by such observations, we propose and analyze two dynamic random graph models that combine node and edge addition with a uniform and a preferential deletion of nodes, respectively. In both cases, we find that the random graphs generated by such models follow power-law degree distributions (in agreement with the degree distribution of many web-like networks). Second, we analyze the expected density of certain small subgraphs--such as defensive alliances on three and four nodes--in various random graph models. Our findings show that while in the binomial random graph the expected density of such subgraphs is very close to zero, in some dynamic random graph models it is much larger. These findings converge with our results obtained by computing the number of communities in some Web crawls. Next, we investigate the computational complexity of the community-mining problem under various definitions of community. Assuming the definition of community as a global defensive alliance, or a global offensive alliance, we prove--using transformations from the dominating set problem--that finding optimal communities is an NP-complete problem. These and other similar complexity results, coupled with the fact that many web-like networks are huge, indicate that it is unlikely that fast, exact sequential algorithms for mining communities can be found. To handle this difficulty, we adopt an algorithmic definition of community and a simpler version of the community-mining problem, namely: find the largest community to which a given set of seed nodes belongs.
We propose several greedy algorithms for this problem. The first proposed algorithm starts out with a set of seed nodes--the initial community--and then repeatedly selects some nodes from the community's neighborhood and pulls them into the community. In each step, the algorithm uses the clustering coefficient--a parameter that measures the fraction of the neighbors of a node that are neighbors themselves--to decide which nodes from the neighborhood should be pulled into the community. The running time of this algorithm grows with the number of nodes visited by the algorithm and the maximum degree encountered; thus, assuming a power-law degree distribution, this algorithm is expected to run in near-linear time. The proposed algorithm achieved good accuracy when tested on some real and computer-generated networks: the fraction of community nodes classified correctly is generally above 80% and often above 90%. A second algorithm, based on a generalized clustering coefficient in which not only the first neighborhood is taken into account but also the second, the third, etc., is also proposed. This algorithm achieves better accuracy than the first one but also runs slower. Finally, a randomized version of the second algorithm, which improves the time complexity without significantly affecting the accuracy, is proposed. The main target application of the proposed algorithms is focused crawling--the selective search for web pages that are relevant to a pre-defined topic.
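The greedy expansion strategy described in this abstract can be sketched roughly as follows. This is an illustrative reimplementation, not the author's code: the tie-breaking rule, the one-node-per-step absorption, and the fixed step budget are assumptions made here for concreteness.

```python
from collections import defaultdict

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbors that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def grow_community(edges, seeds, steps=10):
    """Greedily expand a seed set: at each step, absorb the neighborhood
    node with the highest clustering coefficient."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    community = set(seeds)
    for _ in range(steps):
        frontier = {n for c in community for n in adj[c]} - community
        if not frontier:
            break
        best = max(frontier, key=lambda n: clustering_coefficient(adj, n))
        community.add(best)
    return community
```

On a small graph consisting of a 4-clique attached to a sparse tail, seeding on one clique node makes the community grow into the dense part first, since tail nodes have clustering coefficient 0.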
16

Deep Neural Network Structural Vulnerabilities And Remedial Measures

Yitao Li (9148706) 02 December 2023 (has links)
In the realm of deep learning and neural networks, there has been substantial advancement, but the persistent vulnerability of DNNs to adversarial attacks has prompted the search for more efficient defense strategies. Unfortunately, this has become an arms race: stronger attacks are being developed, while more sophisticated defense strategies are being proposed, which either require modifying the model's structure or incur significant computational costs during training. The first part of this work makes significant progress towards breaking this arms race. Consider natural images, where all feature values are discrete. Our proposed metrics are able to discover all the vulnerabilities surrounding a given natural image. Given sufficient computational resources, we are able to discover all the adversarial examples for a given clean natural image, eliminating the need to develop new attacks. As a remedial measure, our approach is to introduce a random factor into the DNN classification process. Furthermore, our approach can be combined with existing defense strategies, such as adversarial training, to further improve performance.
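The abstract does not specify how the random factor enters the classification process; one common instantiation of randomized inference is majority voting over Gaussian perturbations of the input, sketched here on a toy "network" (the identity map standing in for real logits). All names and parameters are illustrative, not the thesis's method.

```python
import numpy as np

def smoothed_predict(logit_fn, x, sigma=0.1, n=100, seed=0):
    """Classify by majority vote over Gaussian perturbations of the input.
    Randomizing inference makes the decision boundary harder to exploit
    with a single crafted perturbation."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = int(np.argmax(logit_fn(noisy)))  # class with the largest logit
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)
```

With a clear logit margin relative to the noise scale, the vote is stable; an attacker must now shift the expected prediction, not just one forward pass.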
17

[en] ACCOUNTS TO TELL: EMPLOTMENT AS THE FORMAL PRINCIPLE FOR THE DESIGN OF IMAGE-TEXT NARRATIVES AIMED AT THE PRESENTATION OF STATISTICS / [pt] CONTAS A CONTAR: A COMPOSIÇÃO DA INTRIGA COMO PRINCÍPIO FORMAL PARA O DESIGN DE NARRATIVAS VERBO-ICÔNICAS DESTINADAS À APRESENTAÇÃO DE ESTATÍSTICAS

MARCOS BALSTER FIORE CORREIA 13 October 2016 (has links)
[pt] A presente tese investiga como o saber narrativo tem sido e como pode vir a ser aplicado no design de apresentações gráfico-visuais de estatísticas. De início, fatores histórico-culturais são discutidos criticamente com o intuito de identificar os motivos pelos quais a narrativa veio a ser indicada, por publicações sobre disseminação de estatísticas e design da informação, como uma solução para problemas de compartilhamento de informações. Em seguida, conhecimentos teóricos sobre a narrativa (sua conceituação, elementos constituintes e estruturação formal) são revistos e, então, aplicados tanto no exame de exemplares de artefatos que veiculam informações em meios gráfico-visuais (relatórios de pesquisa, notícias e infográficos, entre outros) quanto no design experimental de uma narrativa verbo-icônica que apresenta estatísticas provenientes dos censos demográficos do Brasil. Como método, seguem-se procedimentos de pesquisa qualitativa, fundamentados na hermenêutica de Paul Ricoeur e em seu conceito de composição da intriga, tido pelo autor como o princípio formal da configuração narrativa. A tese demonstra como problemas de representação influem no trabalho de retratar um coletivo social por meio de estatísticas e oferece apontamentos para equacionar questões de design e narrativa a fim de que o retrato produzido alcance seu efeito próprio ou érgon: uma compreensão confiável de como essa coletividade é. / [en] This thesis investigates how narrative knowledge has been and how it might be applied in the design of graphic presentations of statistics. Initially, historical and cultural factors are critically discussed in order to identify the reasons why narrative came to be indicated, by literature on dissemination of statistics and information design, as a solution to information sharing issues. 
Next, theoretical knowledge about narrative (its conceptualization, constituent elements and formal structuration) is reviewed and then applied both in the examination of artefacts that graphically display information (research reports, news and infographics, among others) and in the design of an experimental image-text narrative that shows statistics from the Brazilian demographic censuses. As a method, the thesis follows qualitative research procedures based on Paul Ricoeur's hermeneutics and his concept of emplotment, regarded by the author as the formal principle of narrative configuration. The thesis demonstrates how problems of representation influence the job of portraying a social collective through statistics and provides notes to address design and narrative issues so that the produced picture reaches its proper effect or ergon: a reliable understanding of what this collectivity is like.
18

Modelos de regressão simplex: resíduos de Pearson corrigidos e aplicações / Simplex regression models: corrected Pearson residuals and applications

Santos, Lucimary Afonso dos 02 September 2011 (has links)
A distribuição simplex, proposta por Barndorff-Nielsen e Jørgensen (1991), é útil para a modelagem de dados contínuos no intervalo (0,1). Nesse trabalho, desenvolve-se o modelo de regressão simplex considerando-se η = h(X; β), sendo h(·; ·) uma função arbitrária. Definem-se os resíduos para o modelo considerado e obtêm-se correções assintóticas para resíduos do tipo Ri. A primeira correção proposta baseou-se na obtenção da expressão assintótica para a densidade dos resíduos de Pearson, corrigidos até ordem O(n⁻¹). Esses resíduos foram definidos de forma a terem a mesma distribuição dos resíduos verdadeiros de Pearson. Estudos de simulação mostraram que a distribuição empírica dos resíduos corrigidos pela densidade encontra-se mais próxima da distribuição dos verdadeiros resíduos de Pearson do que a do resíduo não corrigido de Pearson. A segunda correção proposta considera o método dos momentos. Geralmente, E(Ri) e Var(Ri) são diferentes de zero e um, respectivamente, por termos de ordem O(n⁻¹). Usando-se os resultados de Cox e Snell (1968), obtiveram-se as expressões aproximadas de ordem O(n⁻¹) para E(Ri) e Var(Ri). Um estudo de simulação está sendo realizado para avaliação da técnica proposta. A técnica desenvolvida no primeiro estudo foi aplicada a dois conjuntos de dados: o primeiro deles, dados sobre oxidação de amônia, considerando-se preditor linear, e o outro sobre porcentagem de massa seca (MS) em grãos de milho, considerando-se preditores linear e não linear. Os resultados obtidos para os dados de oxidação de amônia indicaram que o modelo com preditor linear está bem ajustado aos dados, considerando-se a exclusão de alguns possíveis pontos influentes, sendo que a correção proposta para a densidade dos resíduos apresenta os melhores resultados. Observando-se os resultados para os dados de massa seca, os melhores resultados foram obtidos considerando-se um dos modelos com preditor não linear.
/ The simplex distribution, proposed by Barndorff-Nielsen and Jørgensen (1991), is useful for modeling continuous data in the (0,1) interval. In this work, we developed the simplex regression model, considering η = h(X; β), where h(·; ·) is an arbitrary function. We defined the residuals for this model and obtained asymptotic corrections to residuals of the type Ri. The first correction proposed was based on obtaining the asymptotic expression for the density of Pearson residuals, corrected to order O(n⁻¹). These residuals were defined so as to have the same distribution as the true Pearson residuals. Simulation studies showed that the empirical distribution of the modified residuals is closer to the distribution of the true Pearson residuals than that of the unmodified Pearson residuals. The second correction considers the method of moments. Generally, E(Ri) and Var(Ri) differ from zero and one, respectively, by terms of order O(n⁻¹). Using the results of Cox and Snell (1968), we obtained the approximate expressions of order O(n⁻¹) for E(Ri) and Var(Ri). A simulation study is being conducted to evaluate the proposed technique. We applied the techniques to two data sets: the first is a dataset on ammonia oxidation, considering a linear predictor, and the other on the percentage of dry matter in maize, considering linear and nonlinear predictors. The results obtained for the ammonia oxidation data indicated that the model with a linear predictor fitted the data well, once some possibly influential points were excluded; the proposed correction for the density of Pearson residuals showed the best results. For the dry matter data, the best results were obtained for a model with a specified nonlinear predictor.
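The moment-based correction described above can be sketched minimally: recenter and rescale raw Pearson residuals whose mean and variance deviate from (0, 1) by small-sample terms. This is an illustrative sketch only; here E(Ri) and Var(Ri) are simply estimated from the sample, not taken from the analytic O(n⁻¹) expressions derived in the thesis.

```python
import numpy as np

def pearson_residuals(y, mu, var):
    """Raw Pearson residuals: (y - mu) / sqrt(Var(y))."""
    return (y - np.asarray(mu)) / np.sqrt(np.asarray(var))

def moment_corrected(r, mean_r, var_r):
    """Method-of-moments correction: recenter and rescale residuals
    whose mean/variance differ from (0, 1) by small-sample terms."""
    return (r - mean_r) / np.sqrt(var_r)
```

After the correction, the residuals have mean 0 and variance 1 by construction, which is the behavior expected of true Pearson residuals.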
19

Descoberta e discernimento de supersimetria versus dimensões extras universais no CERN LHC / Discovery and Discrimination of Supersymmetry versus Universal Extra Dimensions at CERN LHC

Silva, Rafael Marcelino do Carmo 21 September 2015 (has links)
Estimar de forma realista o alcance de descoberta de um experimento de colisão de altas energias, como o realizado no Large Hadron Collider (LHC) do CERN, é uma tarefa complexa, principalmente em vista das técnicas de simulação de eventos e dos métodos de estatística multivariada utilizadas pelas colaborações experimentais na comparação dos dados com as predições teóricas. Descobrir uma nova partícula, contudo, é apenas o primeiro passo na investigação experimental. De modo a estabelecer qual dos eventuais modelos teóricos concorrentes é favorecido pelos dados, torna-se imprescindível o estudo das propriedades desta nova partícula e de suas interações com o restante do espectro. Informações como os números quânticos de spin, conjugação de carga ($C$) e paridade ($P$), podem ser obtidas através do estudo das correlações entre os momentos das partículas produzidas codificadas nas distribuições cinemáticas. O discernimento entre os vários modelos, portanto, passa a ser um problema de combinar todas estas informações de forma eficiente e compará-las aos dados experimentais através de um teste estatístico e decidindo, assim, pela confirmação ou não de um novo sinal e sobre o modelo que melhor explica aqueles dados. No trabalho realizado nesta tese, investigamos o limite do LHC, operando a uma energia de centro-de-massa de 14 TeV, para a descoberta de um modelo supersimétrico (SUSY) simplificado e de seu discernimento em relação a um modelo de dimensões extras universais mínimas (MUED), usando eventos de produção de novas partículas coloridas decaindo, através de cadeias curtas, em jatos e missing energy. 
Nossa abordagem avança em diversos aspectos em comparação a fenomenologias mais simplificadas: utilizando uma análise estatística multivariada, levando em conta incertezas sistemáticas nas normalizações das seções de choque e no formato das distribuições, empregando técnicas de identificação de jatos de quarks e glúons para uma melhor separação dos backgrounds do Modelo padrão (MP), escaneando e otimizando os cortes retangulares, simulando eventos de forma cuidadosa e com correções de ordem superior da cromodinâmica quântica (QCD). Eventos de SUSY e MUED foram simulados para 150 diferentes espectros de massa, ainda não excluídos pelo LHC, e estimamos o potencial de descoberta e de discernimento SUSY versus MUED no plano de massas de squarks e gluinos utilizando as técnicas acima mencionadas. Mostramos, em primeiro lugar, que mesmo de forma simplificada, inserir incertezas sistemáticas é essencial para uma estimativa mais realista do potencial do acelerador, principalmente no que diz respeito ao aumento de luminosidade integrada. Para incertezas nas normalizações da ordem de 20%, o ganho no potencial de busca torna-se mais limitado. Por exemplo, passando de 100 a 3000 fb$^{-1}$, o alcance na massa dos squarks aumenta de ~ 2.8 para ~ 3.1 TeV, ao passo que, sem levar em conta estas incertezas, a estimativa é mais otimista, indo de ~ 3.0 a ~ 3.5 TeV para as mesmas luminosidades. Performance similar é observada no discernimento SUSY versus MUED, onde é possível obter uma significância de $5\sigma$ para massas de squarks de até ~ 2.7 TeV e gluinos ~ 5 TeV, mantendo-se as incertezas sistemáticas a um nível menor do que 10% aproximadamente.
De forma geral, concluímos que um modelo supersimétrico simplificado, como o estudado aqui, pode ser descoberto e confirmado (em relação a um dos seus mais populares concorrentes, MUED) para um espectro com squarks, gluinos e neutralinos de aproximadamente 2.5, 5.0 e 0.3 TeV, respectivamente, se as incertezas sistemáticas puderem ser controladas a um nível de 10% ou menos, após 3 ab$^{-1}$ de luminosidade integrada. / Estimating, in a realistic way, the reach of a high-energy collision experiment such as the CERN Large Hadron Collider (LHC) is a difficult task, especially in view of the event-simulation techniques and multivariate statistical methods used by the experimental collaborations to compare data with theoretical predictions. The discovery of a new particle is just the first step in the experimental exploration. The properties of this particle, such as parity, spin and charge, must be measured to assert which physics model is favored by the collected data. These properties can be measured by analyzing the particle momentum correlations encoded in the kinematical distributions. Discriminating among different models thus becomes a problem of combining all this information efficiently and comparing it with the experimental data through a statistical test, deciding on the confirmation or exclusion of a signal and on which model best describes the data. In this work, we investigate the reach of the LHC, operating at a center-of-mass energy of 14 TeV, for the discovery of a simplified model of supersymmetry (SUSY) and for its discrimination from a model of minimal universal extra dimensions (MUED), using the production of heavy colored particles decaying, through short decay chains, into jets and missing energy.
Our approach improves on simpler phenomenological analyses in several aspects: we used a multivariate statistical analysis, considered systematic uncertainties in the rate and shape of the distributions, implemented quark and gluon jet-tagging techniques for a better separation of signal from background, scanned for the best rectangular cuts, and simulated events carefully, with one-loop corrections from quantum chromodynamics. Events were simulated for 150 different mass spectra not yet excluded by the LHC, and we estimated the potential for discovery and for SUSY-versus-MUED discrimination in the squark-gluino mass plane using the techniques mentioned above. We show, first, that even in a simplified way, including systematic uncertainties is essential for a more realistic estimate of the collider's reach, mainly as the integrated luminosity increases. For systematic rate uncertainties of about 20%, the gain in discovery potential is limited: for example, increasing the luminosity from 100 to 3000 fb$^{-1}$, the reach in squark mass increases from ~ 2.8 to ~ 3.1 TeV, whereas without systematic uncertainties the reach is more optimistic, going from ~ 3.0 to ~ 3.5 TeV for the same luminosities. Similar performance was observed in the discrimination of SUSY versus MUED, where a significance of $5\sigma$ can be obtained for squark masses up to ~ 2.7 TeV and gluino masses of ~ 5 TeV, keeping the systematic uncertainties at a level of about 10%. In general, we conclude that a supersymmetric model like the one studied here can be discovered and confirmed (relative to one of its most popular competitors, MUED) for a mass spectrum with squarks, gluinos and neutralinos of about 2.5, 5.0 and 0.3 TeV, respectively, if the systematic uncertainties can be controlled at a level of about 10%, after 3 ab$^{-1}$ of integrated luminosity.
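The effect of a normalization systematic on the reach can be illustrated with a back-of-the-envelope significance estimate that adds the systematic background uncertainty in quadrature. This simple formula is an assumption for illustration only; the thesis uses a full multivariate statistical analysis, not this approximation.

```python
import math

def significance(s, b, sys_frac=0.0):
    """Approximate discovery significance for s signal events over b
    background events, with a fractional systematic uncertainty on the
    background normalization: Z = s / sqrt(b + (sys_frac * b)^2)."""
    return s / math.sqrt(b + (sys_frac * b) ** 2)

# With no systematics, 100 signal events over 400 background give 5 sigma;
# a 20% background-normalization uncertainty dilutes the same excess to
# well below the discovery threshold.
```

This matches the qualitative conclusion above: once the systematic term dominates the statistical one, extra integrated luminosity buys little additional reach.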
20

Responsabilidade social empresarial e desempenho financeiro das empresas: evidências do Brasil / Social corporate responsibility and financial performance: evidence from Brazil

Kitahara, José Renato 27 August 2012 (has links)
Enquanto a Administração de Empresas evoluiu muito no último século e trouxe vasto ferramental aos gestores de empresas, o tema da Responsabilidade Social Empresarial (RSE) não acompanhou essa evolução e ainda não dispõe de conceitos sólidos e ferramental de apoio aos gestores. Isso justifica o presente estudo, que objetiva identificar, empiricamente, o comportamento das empresas que operam no Brasil, referente aos seus investimentos em ações de RSE e suas relações com o Desempenho Financeiro (DF), com base na Receita Líquida (RL) e no Resultado Operacional (RO). A partir de uma amostra de 2064 Balanços Sociais padrão IBASE (BS) de 378 empresas, no período entre 1996 e 2010, o estudo buscou resposta a oito perguntas de pesquisa e encontrou comportamentos estatisticamente significativos em todas elas. Os resultados indicam que existe relacionamento direto entre a RL e os investimentos em RSE. Existe também um relacionamento direto entre o RO-Positivo e os investimentos em RSE. Nos casos em que o RO é negativo, o relacionamento com os investimentos em RSE está associado ao valor absoluto do RO-Negativo, o que pode ser uma relação dependente da RL ou do porte da empresa. O setor de atuação das empresas é um fator que segmenta comportamentos característicos das empresas e nem todas as turbulências conjunturais nacionais e internacionais impactam e influenciam as decisões de investimentos em RSE e o DF das empresas de forma semelhante. Os modelos matemáticos que relacionam a RL com os investimentos em RSE têm melhor capacidade explicativa que os modelos correspondentes que relacionam o RO a esses mesmos investimentos, sendo que o setor de atuação é um diferenciador significativo. 
Não foram encontradas diferenças significativas na qualidade explicativa dos modelos matemáticos que consideraram o DF do ano-base e do ano anterior em relação às decisões de investimentos em RSE e, mais, a composição do portfólio de investimento em RSE varia em função do setor e do ano de publicação dos BSs. / While Business Management developed considerably over the last hundred years and accumulated many tools for company managers, the theme of Corporate Social Responsibility (CSR) has received only tangential attention and does not yet have solid concepts and tools to support managers. This lack of similar evolution justifies the present study, which aims to identify, empirically, the behavior of companies operating in Brazil with respect to their investments in CSR activities and their relation to Financial Performance (FP), based on Net Income (NI) and Operational Results (OR). From a sample of 2064 IBASE Social Accounting Balances (SAB) of 378 companies between 1996 and 2010, the study sought to answer eight research questions and found statistically significant behavior in all of them. The results indicate that there is a direct relationship between NI and investments in CSR. There is also a direct relationship between positive OR and investments in CSR. When OR is negative, the relationship with CSR investment is associated with the absolute value of the negative OR, which may be a relationship dependent on NI or on company size. The business sector is a factor that segments the characteristic behaviors of companies, and not all national and international economic turmoil impacts and influences investment decisions in CSR and the FP of firms with the same magnitude. The mathematical models that relate NI to investments in CSR have better explanatory power than the corresponding models that relate OR to those same investments, with the sector of activity being a significant differentiator.
There were no significant differences in the explanatory quality of the mathematical models that considered the FP of the base year and of the previous year in relation to decisions on CSR investments; moreover, the composition of the CSR investment portfolio varies by sector and by year of publication of the SABs.
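The comparison of explanatory power between the NI-based and OR-based models can be illustrated with a plain R² computation for a least-squares line. The data and model form here are hypothetical, for illustration only; the thesis fits its own models to the IBASE sample.

```python
import numpy as np

def r_squared(x, y):
    """R^2 of the simple least-squares fit y ~ a + b*x."""
    A = np.column_stack([np.ones_like(x), x])        # design matrix: intercept + slope
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least squares
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# A predictor that tracks the response closely yields R^2 near 1;
# a weakly related predictor yields a much lower R^2, which is the
# sense in which one model "explains better" than another.
```

Comparing the two R² values for the same response is the simplest version of the explanatory-power comparison described in the abstract.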
