71 |
On the Design of Methods to Estimate Network Characteristics Ribeiro, Bruno F. 01 May 2010 (has links)
Social and computer networks permeate our lives. Large networks, such as the Internet, the World Wide Web (WWW), and wireless smartphones, have indisputable economic and social importance. These networks have non-trivial topological features, i.e., features that do not occur in simple networks such as lattices or random networks. Estimating characteristics of these networks from incomplete (sampled) data is a challenging task. This thesis provides two frameworks within which common measurement tasks are analyzed and new, principled measurement methods are designed. The first framework focuses on sampling directly observable network characteristics. This framework is applied to design a novel multidimensional random walk to efficiently sample loosely connected networks. The second framework focuses on the design of measurement methods to estimate indirectly observable network characteristics. This framework is applied to design two new, principled estimators of flow size distributions over Internet routers using (1) randomly sampled IP packets and (2) a data stream algorithm.
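The random-walk sampling idea behind the first framework can be illustrated with a short sketch. This is a minimal, hypothetical example of a single degree-biased random walker over an adjacency-list graph, not the multidimensional, multi-walker sampler designed in the thesis; the toy graph, function names, and reweighting remark are illustrative assumptions.

```python
import random

def random_walk_sample(adj, start, num_steps, rng=None):
    """Collect node samples by a simple random walk on an undirected graph.

    adj: dict mapping node -> list of neighbor nodes.
    Returns the sequence of visited nodes. The walk is degree-biased, so
    estimators built on it must reweight each visit, e.g. by 1/degree.
    """
    rng = rng or random.Random(0)
    node = start
    visits = [node]
    for _ in range(num_steps):
        nbrs = adj[node]
        if not nbrs:                      # dangling node: restart anywhere
            node = rng.choice(list(adj))
        else:
            node = rng.choice(nbrs)
        visits.append(node)
    return visits

# Toy example: two loosely connected triangles joined by a single edge.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
print(random_walk_sample(adj, start=0, num_steps=10))
```

A single walker like this can linger inside one loosely connected component for a long time, which is what motivates running several coupled walkers, as the thesis's multidimensional walk does.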
|
72 |
Evaluating and optimizing the performance of real-time feedback-driven single particle tracking microscopes through the lens of information and optimal control Vickers, Nicholas Andrew 17 January 2023 (has links)
Single particle tracking has become a ubiquitous class of tools in the study of biology at the molecular level. While the broad adoption of these techniques has yielded significant advances, it has also revealed the limitations of the methods. Most notable among these is that traditional single particle tracking is limited to imaging the particle at low temporal resolutions and small axial ranges. This restricts applications to slow processes confined to a plane. Biological processes in the cell, however, happen at multiple time scales and length scales. Real-time feedback-driven single particle
tracking microscopes have emerged as one group of methods that can overcome these limitations. However, the development of these techniques has been ad hoc, and their performance has not been consistently analyzed in a way that enables comparisons across techniques, leading to incremental improvements on existing sets of tools with no sense of fit or optimality with respect to single particle tracking (SPT) experimental requirements. This thesis addresses these challenges through three key questions: 1) What performance metrics are necessary to compare different techniques, allowing for easy selection of the method that best fits a particular application? 2) What is a procedure for designing single particle tracking microscopes for the best performance? 3) How does one experimentally test single particle tracking performance on specific microscopes in a controllable and repeatable way? These questions are tackled in four thrusts: 1) a comprehensive review of real-time feedback-driven single particle tracking spectroscopy, 2) the creation of an optimization framework using Fisher information, 3) the design of a real-time feedback-driven single particle tracking microscope utilizing extremum
seeking control, and 4) the development of synthetic motion, a protocol that provides biologically relevant known ground-truth particle motion to test single particle tracking microscopes and data analysis algorithms. The comprehensive review yields a unified view of single particle tracking microscopes and highlights two clear challenges, the photon budget and the control temporal budget, that work to limit the two key performance metrics, tracking duration and Fisher information. Fisher information provides a common framework to understand the elements of real-time feedback-driven single particle tracking microscopes, and the corresponding information optimization framework is a method to optimally design these microscopes towards an experimental aim. The thesis then expands an existing tracking algorithm to handle multiple
particles through a multi-layer control architecture, and introduces REACTMIN, a new approach that reactively scans a minimum of light to overcome both the photon budget and the control temporal budget. This enables tracking durations of up to hours and position localization down to a few nanometers, with temporal resolutions greater than 1 kHz. Finally, synthetic motion provides a repeatable and programmable method to test single particle tracking microscopes and algorithms against a known ground truth. The performance of this method is analyzed in the presence of common actuator limitations.
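The photon-budget limit highlighted above can be made concrete with a back-of-the-envelope Fisher-information estimate. The sketch below assumes an idealized, background-free Gaussian point-spread function, for which the position Fisher information from N detected photons is N/s^2 and the Cramér-Rao bound on localization precision is s/sqrt(N); the function name and numbers are illustrative assumptions, not the thesis's full optimization framework.

```python
import math

def crlb_localization(psf_sigma_nm: float, n_photons: int) -> float:
    """Cramér-Rao lower bound on localization precision (nm) for an
    idealized, background-free Gaussian PSF: Fisher information
    I = N / sigma^2, so sigma_CRLB = sigma / sqrt(N)."""
    return psf_sigma_nm / math.sqrt(n_photons)

# e.g. a 150 nm PSF and a budget of 1000 detected photons -> ~4.7 nm
print(f"{crlb_localization(150.0, 1000):.1f} nm")
```

The bound makes the trade-off explicit: halving the localization error requires four times the photon budget, and those photons must be collected within the control loop's temporal budget if they are to improve tracking.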
|
73 |
The Tully-Fisher Relation, its residuals, and a comparison to theoretical predictions for a broadly selected sample of galaxies Pizagno, James Lawrence, II 13 September 2006 (has links)
No description available.
|
74 |
Time to Coalescence for a Class of Nonuniform Allocation Processes McSweeney, John Kingen 27 August 2009 (has links)
No description available.
|
75 |
Differences in the Experience of the 1918-1919 Influenza Pandemic at Norway House and Fisher River, Manitoba / 1918-1919 Influenza Pandemic at Norway House and Fisher River, Manitoba Slonim, Karen 09 1900 (has links)
This thesis discusses the impact of the 1918 influenza pandemic at Norway House and Fisher River, Manitoba. Despite sharing similar overall mortality rates during the pandemic, the two communities showed substantial differences when the distribution of deaths is examined at the family level. Reconstituted family data show that deaths were more tightly clustered within a small number of families at Norway House, while at Fisher River they were distributed amongst more families. Adults perished more often at Norway House than at Fisher River. Historical documentation suggests, moreover, that the day-to-day functioning of Norway House was more severely disrupted than was the case for Fisher River. I argue that the differences in the family distribution of mortality at the two communities are linked to differences in social organization and, specifically, to the presence or absence of the Hudson's Bay Company. To test this hypothesis, the data are examined using aggregate techniques, reconstituted family data, and a technique outlined in Scott and Duncan's 2001 work. / Thesis / Master of Arts (MA)
|
76 |
Towards the Safety and Robustness of Deep Models Karim, Md Nazmul 01 January 2023 (has links) (PDF)
The primary focus of this doctoral dissertation is to investigate the safety and robustness of deep models. Our objective is to thoroughly analyze and introduce innovative methodologies for cultivating robust representations under diverse circumstances. Deep neural networks (DNNs) have emerged as fundamental components in recent advancements across various tasks, including image recognition, semantic segmentation, and object detection. Representation learning stands as a pivotal element in the efficacy of DNNs, involving the extraction of significant features from data through mechanisms like convolutional neural networks (CNNs) applied to image data. In real-world applications, ensuring the robustness of these features against various adversarial conditions is imperative, thus emphasizing robust representation learning. Through the acquisition of robust representations, DNNs can enhance their ability to generalize to new data, mitigate the impact of label noise and domain shifts, and bolster their resilience against external threats, such as backdoor attacks. Consequently, this dissertation explores the implications of robust representation learning in three principal areas: i) Backdoor Attack, ii) Backdoor Defense, and iii) Noisy Labels.
First, we study backdoor attack creation and detection from different perspectives. A backdoor attack raises AI safety and robustness issues: an adversary can insert malicious behavior into a DNN by altering the training data. Second, we aim to remove the backdoor from the DNN using two different types of defense techniques: i) training-time defense and ii) test-time defense. Training-time defense prevents the model from learning the backdoor during model training, whereas test-time defense tries to purify the backdoored model after the backdoor has already been inserted. Third, we explore the direction of noisy label learning (NLL) from two perspectives: a) offline NLL and b) online continual NLL. Representation learning under noisy labels is severely impacted by the memorization of those noisy labels, which leads to poor generalization. We perform uniform sampling and contrastive-learning-based representation learning, and we also test the algorithm's efficiency in an online continual learning setup. Furthermore, we show the transfer and adaptation of learned representations from one domain to another, e.g., source-free domain adaptation (SFDA). We study the impact of noisy labels under SFDA settings and propose a novel algorithm that produces state-of-the-art (SOTA) performance.
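To make the threat model concrete, the sketch below shows the kind of data-poisoning step that underlies a patch-trigger backdoor attack: a small trigger is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. The array shapes, poison rate, and patch are illustrative assumptions, not the specific attacks studied in the dissertation.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05,
                   patch_size=3, patch_value=1.0, seed=0):
    """Return a copy of (images, labels) with a square trigger patch stamped
    in the bottom-right corner of a random subset, relabeled to target_class.

    images: float array of shape (N, H, W, C) scaled to [0, 1].
    labels: int array of shape (N,).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:, :] = patch_value  # the trigger
    labels[idx] = target_class                                # attacker label
    return images, labels, idx

# Toy usage: 100 random 32x32 RGB "images", 10 classes, poison 5% to class 7.
x = np.random.default_rng(1).random((100, 32, 32, 3))
y = np.random.default_rng(2).integers(0, 10, size=100)
x_p, y_p, poisoned = poison_dataset(x, y, target_class=7)
print(len(poisoned), y_p[poisoned][:5])
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is present, which is exactly what the training-time and test-time defenses described above aim to prevent or undo.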
|
77 |
Indexing Large Permutations in Hardware Odom, Jacob Henry 07 June 2019 (has links)
Generating unbiased permutations at run time has traditionally been accomplished through application-specific optimized combinational logic and has been limited to very small permutations. For generating unbiased permutations of any larger size, variations of the memory-dependent Fisher-Yates algorithm are known to be an optimal solution in software and have been relied on as a hardware solution even to this day. However, this thesis proves Fisher-Yates to be a suboptimal solution in hardware. It shows variations of Fisher-Yates to be suboptimal by proposing an alternate method that does not rely on memory and outperforms Fisher-Yates-based permutation generators, while still scaling to very large permutations. This thesis also proves that the proposed method is unbiased and requires minimal input. Lastly, this thesis demonstrates a means to scale the proposed method to permutations of any size and to produce optimal partial permutations. / Master of Science / In computing, some applications need the ability to shuffle or rearrange items based on run-time information during their normal operations. A similar task is a partial shuffle, where only an information-dependent selection of the total items is returned in a shuffled order. Initially, one might assume these are trivial tasks. However, the applications that rely on this ability are typically related to security, which requires repeatable, unbiased operations. These requirements quickly turn seemingly simple tasks into complex ones. Worse, they are often done incorrectly and only appear to meet these requirements, which has disastrous implications for security. A current and dominant method to shuffle items that meets these requirements was developed over fifty years ago and is based on an even older algorithm referred to as Fisher-Yates, after its original authors. Fisher-Yates-based methods shuffle items in memory, which is seen as advantageous in software but only serves as a disadvantage in hardware, since memory access is significantly slower than other operations. Additionally, when performing a partial shuffle, Fisher-Yates methods require the same resources as when performing a complete shuffle. This is because, with Fisher-Yates methods, each element in a shuffle is dependent on all of the other elements. Alternate methods that meet these requirements are known but are only able to shuffle a very small number of items before they become too slow for practical use. To combat the disadvantages of current shuffling methods, this thesis proposes an alternate approach to performing shuffles. This alternate approach meets the previously stated requirements while outperforming current methods. It can also be extended to shuffling any number of items while maintaining a usable level of performance. Further, unlike current popular shuffling methods, the proposed method has no inter-item dependency and thus offers great advantages over current popular methods for partial shuffles.
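For reference, here is a minimal software version of the Fisher-Yates (Knuth) shuffle that serves as the baseline the thesis argues against; the in-place reads and writes of the array are exactly the memory accesses that become the bottleneck in hardware. The implementation below is a generic sketch, not the thesis's hardware design.

```python
import random

def fisher_yates_shuffle(items, rng=None):
    """Unbiased in-place shuffle: every one of the n! permutations is equally
    likely, but each step must read and write the array held in memory."""
    rng = rng or random.Random()
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randint(0, i)      # uniform index in [0, i]
        a[i], a[j] = a[j], a[i]    # the memory-bound swap
    return a

print(fisher_yates_shuffle(range(10), random.Random(42)))
```

Note that the swap at each step can touch any earlier position of the array, which is the inter-item dependency the proposed memory-free method avoids.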
|
78 |
Modélisation Bayésienne des mesures de vitesses particulières dans le projet CosmicFlows / Bayesian modeling of peculiar velocity measurements for the CosmicFlows collaboration Graziani, Romain 14 September 2018 (has links)
The cosmological concordance model relies on the existence of so-called "dark" matter, which hypothetically interacts only through gravity and therefore cannot be observed directly. Because they trace the gravitational field, the peculiar velocities of galaxies are unbiased probes of the matter content of the Universe: they make it possible to map the nearby Universe (dark matter included) and to test the ΛCDM model through the expansion rate of the Universe and the growth rate of structures. Observationally, the peculiar velocity of a galaxy is measured from its distance, a quantity that is very imprecise for extragalactic data. If poorly modeled, this uncertainty leads to biased peculiar-velocity analyses and degrades the quality of this cosmological probe. Within this context, this thesis studies the statistical and systematic errors of peculiar-velocity analyses: first by investigating and modeling these errors, then by proposing new Bayesian models to take them into account. In particular, a model is developed that reconstructs the density field of the Local Universe from measurements of the rotational velocities of galaxies. It relies on the velocity correlations predicted by the concordance model and on a model of the Tully-Fisher relation, which links the rotational velocity of a galaxy to its luminosity. The model is applied to the CosmicFlows-3 catalog of extragalactic distances, providing a new map of the nearby Universe and of its kinematics.
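The chain from an observed rotational velocity to a peculiar velocity can be sketched in a few lines of code. This is a schematic illustration, not the Bayesian forward model of the thesis: the Tully-Fisher zero point and slope are made-up illustrative values, H0 = 75 km/s/Mpc is assumed only for the example, and the low-redshift approximation v_pec ~ cz - H0*d is used.

```python
import math

C_KMS = 299792.458   # speed of light [km/s]
H0 = 75.0            # Hubble constant assumed for this example [km/s/Mpc]

def tully_fisher_distance(m_app, log_w, zero_point=-20.0, slope=-8.0):
    """Distance in Mpc from an apparent magnitude and log10 of the rotational
    linewidth, using an illustrative linear Tully-Fisher calibration
    M = zero_point + slope * (log_w - 2.5)."""
    m_abs = zero_point + slope * (log_w - 2.5)
    mu = m_app - m_abs                   # distance modulus
    return 10.0 ** ((mu - 25.0) / 5.0)   # Mpc

def peculiar_velocity(z_obs, d_mpc):
    """Low-redshift approximation v_pec ~ c*z - H0*d, in km/s."""
    return C_KMS * z_obs - H0 * d_mpc

d = tully_fisher_distance(m_app=12.5, log_w=2.6)
print(f"d = {d:.1f} Mpc, v_pec = {peculiar_velocity(0.012, d):.0f} km/s")
```

Because the distance enters through the distance modulus, an intrinsic Tully-Fisher scatter of a few tenths of a magnitude translates into distance errors of roughly 15-20%, and hence velocity errors of hundreds of km/s at even modest distances; propagating that uncertainty correctly is what the Bayesian model is built for.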
|
79 |
Entropia e informação de sistemas quânticos amortecidos / Entropy and information of quantum damped systems Lima Júnior, Vanderley Aguiar de January 2014 (has links)
LIMA JÚNIOR, Vanderley Aguiar de. Entropia e informação de sistemas quânticos amortecidos. 2014. 65 f. Dissertação (Mestrado em Física) - Programa de Pós-Graduação em Física, Departamento de Física, Centro de Ciências, Universidade Federal do Ceará, Fortaleza, 2014.
|
80 |
Dinâmica de gliomas e possíveis tratamentos / Dynamics of gliomas and possible treatments Alvarez, Robinson Franco January 2016 (has links)
Advisor: Prof. Dr. Roberto Venegeroles Nascimento / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Física, 2016. / This work studies basic aspects of the dynamics of BCL1 B-lymphoma cancer cells and of gliomas, with emphasis on the latter. It begins by reviewing population models of cancer inspired by the work of Lotka and Volterra, which offer a very simple description of the interaction between the cancer (prey) and the immune system (hunter). It then reviews a global spatio-temporal model based on the Fisher-Kolmogorov-Petrovsky-Piskounov (FKPP) equation [1], which clarifies the dichotomy between proliferation and motility, strongly associated with tumor growth and invasiveness, respectively, of cancer cells. Building on the FKPP model, a more detailed computational study applies different treatment protocols and analyzes their effects on the growth and development of gliomas; the study suggests that a treatment with a longer interval between doses could be closer to optimal than a more aggressive one. A local population model of cancer is also proposed that accounts for the polyclonal nature of cancer cells and their interactions with the innate and specific immune systems. In this model, phenomena such as the dormancy state and the escape phase appear for parameter values corresponding to the BCL1 B-lymphoma [2], which explains immunoediting and escape from immune surveillance [3] and could support the design of more appropriate treatment protocols. The model is then reparameterized using features characteristic of glioma cells and assuming the presence of immunodeficiency, which yields periodic oscillatory coexistence of the tumor population and the immune-system cells and could explain clinical cases of remission followed by tumor recurrence. Finally, under certain conditions the tumor population exhibits chaotic dynamics, which could explain clinical cases in which the disease cannot be controlled, above all in elderly patients or in patients whose clinical condition involves a deficiency in the normal functioning of the immune system.
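The FKPP equation at the heart of this study reads du/dt = D d²u/dx² + rho*u*(1 - u), where u is the normalized tumor-cell density, D the motility (diffusion) coefficient, and rho the proliferation rate; its traveling fronts advance at speed 2*sqrt(D*rho), which is what couples invasiveness to both parameters. The explicit finite-difference sketch below illustrates the kind of simulation used to compare growth scenarios; the grid, D, rho, and the initial profile are illustrative assumptions, not the dissertation's fitted values, and no treatment term is included.

```python
import numpy as np

def fkpp_step(u, D, rho, dx, dt):
    """One explicit Euler step of du/dt = D*u_xx + rho*u*(1 - u)
    with zero-flux (Neumann) boundaries."""
    u_xx = np.empty_like(u)
    u_xx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u_xx[0] = 2.0 * (u[1] - u[0]) / dx**2        # ghost-node zero-flux ends
    u_xx[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return u + dt * (D * u_xx + rho * u * (1.0 - u))

# Illustrative parameters: D in mm^2/day, rho in 1/day, 50 mm domain.
dx, dt, D, rho = 0.1, 0.01, 0.05, 0.10           # dt satisfies dt < dx^2 / (2*D)
x = np.arange(0.0, 50.0 + dx, dx)
u = 0.1 * np.exp(-((x - 25.0) ** 2))             # small initial lesion at 25 mm
for _ in range(10_000):                          # simulate ~100 days
    u = fkpp_step(u, D, rho, dx, dt)
print(f"peak density {u.max():.2f}, extent above 0.5: {np.sum(u > 0.5) * dx:.1f} mm")
```

Adding a time-dependent kill term, for instance -k(t)*u during chemotherapy pulses, to the reaction part is a natural way to encode the dosing protocols whose effects the study compares.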
|