71

Evaluating and optimizing the performance of real-time feedback-driven single particle tracking microscopes through the lens of information and optimal control

Vickers, Nicholas Andrew 17 January 2023 (has links)
Single particle tracking has become a ubiquitous class of tools in the study of biology at the molecular level. While the broad adoption of these techniques has yielded significant advances, it has also revealed the limitations of the methods. Most notable among these is that traditional single particle tracking is limited to imaging the particle at low temporal resolutions and over small axial ranges. This restricts applications to slow processes confined to a plane. Biological processes in the cell, however, happen at multiple time scales and length scales. Real-time feedback-driven single particle tracking microscopes have emerged as one group of methods that can overcome these limitations. However, the development of these techniques has been ad hoc and their performance has not been consistently analyzed in a way that enables comparisons across techniques, leading to incremental improvements to existing sets of tools with no sense of fit or optimality with respect to the experimental requirements of single particle tracking. This thesis addresses these challenges through three key questions: 1) What performance metrics are necessary to compare different techniques, allowing for easy selection of the method that best fits a particular application? 2) What is a procedure for designing single particle tracking microscopes for the best performance? 3) How does one experimentally test single particle tracking performance on specific microscopes in a controllable and repeatable way? These questions are tackled in four thrusts: 1) a comprehensive review of real-time feedback-driven single particle tracking spectroscopy, 2) the creation of an optimization framework using Fisher information, 3) the design of a real-time feedback-driven single particle tracking microscope utilizing extremum seeking control, and 4) the development of synthetic motion, a protocol that provides biologically relevant, known ground-truth particle motion to test single particle tracking microscopes and data analysis algorithms. The comprehensive review yields a unified view of single particle tracking microscopes and highlights two clear challenges, the photon budget and the control temporal budget, that work to limit the two key performance metrics, tracking duration and Fisher information. Fisher information provides a common framework for understanding the elements of real-time feedback-driven single particle tracking microscopes, and the corresponding information optimization framework is a method to optimally design these microscopes towards an experimental aim. The thesis then expands an existing tracking algorithm to handle multiple particles through a multi-layer control architecture, and introduces REACTMIN, a new approach that reactively scans a minimum of light to overcome both the photon budget and the control temporal budget. This enables tracking durations of up to hours and position localization down to a few nanometers, with temporal resolutions greater than 1 kHz. Finally, synthetic motion provides a repeatable and programmable method to test single particle tracking microscopes and algorithms in an experiment with known ground truth. The performance of this method is analyzed in the presence of common actuator limitations. / 2024-01-16T00:00:00Z
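As a brief, hedged aside (a standard textbook result, not a formula taken from the thesis): under an idealized Gaussian point-spread-function model of width s, with N detected photons and no background or pixelation, the Fisher information for the particle position and the corresponding Cramér-Rao bound on localization precision are

\[
I(x) \;=\; \frac{N}{s^{2}},
\qquad
\sigma_{x} \;\ge\; \frac{1}{\sqrt{I(x)}} \;=\; \frac{s}{\sqrt{N}} .
\]

This makes the photon budget explicit: halving the localization error requires four times as many detected photons, which is precisely the tension with tracking duration highlighted above.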
72

The Tully-Fisher Relation, its residuals, and a comparison to theoretical predictions for a broadly selected sample of galaxies

Pizagno, James Lawrence, II 13 September 2006 (has links)
No description available.
73

Time to Coalescence for a Class of Nonuniform Allocation Processes

McSweeney, John Kingen 27 August 2009 (has links)
No description available.
74

Differences in the Experience of the 1918-1919 Influenza Pandemic at Norway House and Fisher River, Manitoba / 1918-1919 Influenza Pandemic at Norway House and Fisher River, Manitoba

Slonim, Karen 09 1900 (has links)
This thesis discusses the impact of the 1918 influenza pandemic at Norway House and Fisher River, Manitoba. Despite sharing similar overall mortality rates during the pandemic, the two communities showed substantial differences when the distribution of deaths is examined at the family level. Reconstituted family data show that deaths were more tightly clustered within a small number of families at Norway House, while at Fisher River they were distributed among more families. Adults perished more often at Norway House than at Fisher River. Historical documentation suggests, moreover, that the day-to-day functioning of Norway House was more severely disrupted than was the case for Fisher River. I argue that the differences in the family distribution of mortality at the two communities are linked to differences in social organization and, specifically, to the presence or absence of the Hudson's Bay Company. To test this hypothesis, the data are examined using aggregate techniques, reconstituted family data, and a technique outlined in Scott and Duncan's 2001 work. / Thesis / Master of Arts (MA)
75

Indexing Large Permutations in Hardware

Odom, Jacob Henry 07 June 2019 (has links)
Generating unbiased permutations at run time has traditionally been accomplished through application-specific, optimized combinational logic and has been limited to very small permutations. For generating unbiased permutations of any larger size, variations of the memory-dependent Fisher-Yates algorithm are known to be an optimal solution in software and have been relied on as a hardware solution even to this day. In hardware, however, this thesis proves Fisher-Yates to be a suboptimal solution. This thesis shows variations of Fisher-Yates to be suboptimal by proposing an alternate method that does not rely on memory and outperforms Fisher-Yates-based permutation generators, while still being able to scale to very large permutations. This thesis also proves that the proposed method is unbiased and requires minimal input. Lastly, this thesis demonstrates a means to scale the proposed method to permutations of any size and also to produce optimal partial permutations. / Master of Science / In computing, some applications need the ability to shuffle or rearrange items based on run-time information during their normal operations. A similar task is a partial shuffle, where only an information-dependent selection of the total items is returned in a shuffled order. Initially, there may be the assumption that these are trivial tasks. However, the applications that rely on this ability are typically related to security, which requires repeatable, unbiased operations. These requirements quickly turn seemingly simple tasks into complex ones. Worse, they are often done incorrectly and only appear to meet these requirements, which has disastrous implications for security. A current and dominant method to shuffle items that meets these requirements was developed over fifty years ago and is based on an even older algorithm referred to as Fisher-Yates, after its original authors. Fisher-Yates-based methods shuffle items in memory, which is seen as advantageous in software but only serves as a disadvantage in hardware, since memory access is significantly slower than other operations. Additionally, when performing a partial shuffle, Fisher-Yates methods require the same resources as when performing a complete shuffle. This is because, with Fisher-Yates methods, each element in a shuffle depends on all of the other elements. Alternate methods that meet these requirements are known but can only shuffle a very small number of items before they become too slow for practical use. To combat the disadvantages of current shuffling methods, this thesis proposes an alternate approach to performing shuffles. This alternate approach meets the previously stated requirements while outperforming current methods. It can also be extended to shuffling any number of items while maintaining a usable level of performance. Further, unlike current popular shuffling methods, the proposed method has no inter-item dependency and thus offers great advantages over current popular methods for partial shuffles.
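As a hedged illustration of the memory-dependent baseline discussed above (a generic software sketch, not code from the thesis), the modern in-place Fisher-Yates shuffle can be written in Python as:

```python
import random

def fisher_yates_shuffle(items):
    """Unbiased in-place shuffle (modern Fisher-Yates / Durstenfeld variant)."""
    # Walk the array backwards, swapping each element with a uniformly chosen
    # element at or before it; every permutation is equally likely.
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)  # uniform over 0..i inclusive
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates_shuffle(list(range(10))))
```

Every iteration reads and writes the array held in memory, which is exactly the memory dependence that the thesis identifies as the bottleneck for a hardware implementation.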
76

Towards the Safety and Robustness of Deep Models

Karim, Md Nazmul 01 January 2023 (has links) (PDF)
The primary focus of this doctoral dissertation is to investigate the safety and robustness of deep models. Our objective is to thoroughly analyze and introduce innovative methodologies for cultivating robust representations under diverse circumstances. Deep neural networks (DNNs) have emerged as fundamental components in recent advancements across various tasks, including image recognition, semantic segmentation, and object detection. Representation learning stands as a pivotal element in the efficacy of DNNs, involving the extraction of significant features from data through mechanisms like convolutional neural networks (CNNs) applied to image data. In real-world applications, ensuring the robustness of these features against various adversarial conditions is imperative, thus emphasizing robust representation learning. Through the acquisition of robust representations, DNNs can enhance their ability to generalize to new data, mitigate the impact of label noise and domain shifts, and bolster their resilience against external threats, such as backdoor attacks. Consequently, this dissertation explores the implications of robust representation learning in three principal areas: i) backdoor attacks, ii) backdoor defense, and iii) noisy labels. First, we study backdoor attack creation and detection from different perspectives. Backdoor attacks concern AI safety and robustness issues in which an adversary can insert malicious behavior into a DNN by altering the training data. Second, we aim to remove the backdoor from a DNN using two different types of defense techniques: i) training-time defense and ii) test-time defense. Training-time defense prevents the model from learning the backdoor during model training, whereas test-time defense tries to purify the backdoor model after the backdoor has already been inserted. Third, we explore the direction of noisy label learning (NLL) from two perspectives: a) offline NLL and b) online continual NLL. Representation learning under noisy labels is severely impacted by the memorization of those noisy labels, which leads to poor generalization. We perform uniform sampling and contrastive learning-based representation learning. We also test the algorithm's efficiency in an online continual learning setup. Furthermore, we show the transfer and adaptation of representations learned in one domain to another domain, e.g., source-free domain adaptation (SFDA). We study the impact of noisy labels under SFDA settings and propose a novel algorithm that produces state-of-the-art (SOTA) performance.
77

Modélisation Bayésienne des mesures de vitesses particulières dans le projet CosmicFlows / Bayesian modeling of peculiar velocity measurements for the CosmicFlows collaboration

Graziani, Romain 14 September 2018 (has links)
Le modèle de concordance de la cosmologie moderne repose entre autre sur l'existence de matière dite « noire », matière qui n'intéragirait que gravitationellement et qui ne peut donc pas être observée directement. Les vitesses particulières des galaxies, puisqu'elles tracent le champ de gravité, sont des sondes non-biaisées de la matière dans l'Univers. Ainsi, l'étude de ces vitesses particulières permet non seulement de cartographier l'Univers proche (matière noire comprise), mais aussi de tester le modèle ΛCDM via la vitesse d'expansion de l'Univers et le taux de formation des structures. Observationnellement, la mesure de la vitesse particulière d'une galaxie se fait à partir de la mesure de sa distance, mesure très imprécise pour les données extragalactiques. Mal modélisée, cette incertitude conduit à des analyses biaisées des vitesses particulières, et ainsi détériore la qualité de cette sonde cosmologique. Dans ce contexte, cette thèse s'intéresse aux erreurs systématiques statistiques des analyses de vitesses particulières. D'abord en étudiant puis modélisant ces erreurs systématiques. Ensuite en proposant de nouveaux modèles pour les prendre en compte. En particulier, y est développé un modèle permettant, à partir des mesures de la vitesse de rotation des galaxies, de reconstruire le champ de densité de l'Univers Local. Ce modèle s'appuie sur l'analyse des corrélations de vitesse données par le modèle de concordance, et la modélisation de la relation de Tully-Fisher, qui lie la vitesse de rotation des galaxies à leur luminosté. Le modèle développé est appliqué au catalogue de distances extragalactiques CosmicFlows-3, permettant ainsi une nouvelle cartographie de l'Univers proche et de sa cinématique / The cosmological concordance model relies on the existence of a "dark" matter which hypothetically only interacts through gravity; hence, dark matter cannot be observed directly with standard techniques. Since they directly probe gravity, the peculiar velocities of galaxies are an unbiased tool to probe the matter content of the Universe. They can trace the total matter field and constrain the Local Universe's expansion rate and growth of structures. The peculiar velocity of a galaxy can only be measured from its distance, whose determination is very inaccurate for distant objects. If not correctly modeled, these uncertainties can lead to biased analyses and poor constraints on the ΛCDM model. Within this context, this PhD thesis studies the systematic and statistical errors of peculiar velocity analyses: first by investigating and modeling these errors, then by building Bayesian models to include them. In particular, a model of the Local Universe's velocity field reconstructed from observations of the rotational velocity of galaxies is presented. This model relies on the ΛCDM peculiar velocity correlations and on a Tully-Fisher relation model. The model has then been applied to the CosmicFlows-3 catalog of distances and provides a new kinematic map of the Local Universe.
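For orientation (standard low-redshift relations, not equations reproduced from the thesis): the peculiar velocity follows from the observed redshift z and an independent distance estimate d, and the Tully-Fisher relation supplies that distance from the galaxy's rotational velocity,

\[
v_{\mathrm{pec}} \;\simeq\; c\,z \;-\; H_{0}\, d ,
\qquad
M \;=\; a \,\log_{10} V_{\mathrm{rot}} \;+\; b ,
\]

where M is the absolute magnitude predicted from the rotational velocity V_rot (with empirically calibrated slope a and zero point b), so that comparing M with the apparent magnitude yields d. The large intrinsic scatter of this relation is the distance uncertainty that the Bayesian model described above must propagate.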
78

Entropia e informação de sistemas quânticos amortecidos / Entropy and information of quantum damped systems

Lima Júnior, Vanderley Aguiar de January 2014 (has links)
LIMA JÚNIOR, Vanderley Aguiar de. Entropia e informação de sistemas quânticos amortecidos. 2014. 65 f. Dissertação (Mestrado em Física) - Programa de Pós-Graduação em Física, Departamento de Física, Centro de Ciências, Universidade Federal do Ceará, Fortaleza, 2014.
79

Dinâmica de gliomas e possíveis tratamentos / Dynamics of gliomas and possible treatments

Alvarez, Robinson Franco January 2016 (has links)
Orientador: Prof. Dr. Roberto Venegeroles Nascimento / Dissertação (mestrado) - Universidade Federal do ABC, Programa de Pós-Graduação em Física, 2016. / Neste trabalho se estudaram aspectos básicos relacionados com a dinâmica de células cancerígenas do tipo B-Linfoma BCL1 e de gliomas fazendo ênfases neste último caso. O trabalho se iniciou revisando alguns modelos populacionais do câncer inspirados nos trabalhos de Lotka e Volterra o qual oferecem uma descrição muito simples da interação entre o câncer (presa) e o sistema imunológico (caçador). Posteriormente é revisado um modelo global espaço-temporal baseado nas equações de Fisher-Kolmogorov-Petrovsky-Piskounov (FKPP) [1] o qual permitiu aclarar a dicotomia entre proliferação e motilidade associada fortemente ao crescimento tumoral e à invasividade, respectivamente, das células cancerosas. A partir do modelo FKPP também se fez um estudo computacional mais detalhado aplicando diferentes protocolos de tratamentos para analisar seus efeitos sobre o crescimento e desenvolvimento de gliomas. O estudo sugere que um tratamento com maior tempo entre cada dose poderia ser mais ótimo do que um tratamento mais agressivo. Propõe-se também um modelo populacional local do câncer em que se tem em conta o caráter policlonal das células cancerígenas e as interações destas com o sistema imunológico natural e especifico. Neste último modelo se consegui apreciar fenômenos como dormancy state (estado de latência) e escape phase (fase de escape) para valores dos parâmetros correspondentes ao câncer de tipo B-Linfoma BCL1 [2] o qual explica os fenômenos de imunoedição e escape da imunovigilância [3] o qual poderia permitir propor novos protocolos de tratamentos mais apropriados. Depois se fez uma reparametrização do modelo baseado em algumas características mais próprias das células tumorais do tipo glioma e assumindo presença de imunodeficiência com o que se obtém coexistências oscilatórias periódicas tanto da população tumoral assim como das células do sistema imunológico o qual poderia explicar os casos clínicos de remissão e posterior reincidência tumoral. Finalmente se obtiveram baixo certas condições, uma dinâmica caótica na população tumoral o qual poderia explicar os casos clínicos em que se apresentam falta de controlabilidade da doença sobre tudo em pessoas idosas ou com algum quadro clinico que envolve alguma deficiência no funcionamento normal do sistema imunológico. / In this work we studied basic aspects of the dynamics of cancer cells of type B-lymphoma BCL1 and of gliomas, with strong emphasis on the latter case. We start by reviewing some population models of cancer inspired by the works of Lotka and Volterra, which offer a very simple description of the interaction between the cancer (prey) and the immune system (hunter). We then review a global spatio-temporal model based on the Fisher-Kolmogorov-Petrovsky-Piskounov (FKPP) equations [1], which clarifies the dichotomy between proliferation and motility, strongly associated with tumor growth and invasiveness, respectively, of the cancer cells. From the FKPP model we also carried out a more detailed computational study, applying different treatment protocols to analyze their effects on the growth and development of gliomas. The study suggests that a treatment with a longer time between doses could be closer to optimal than a more aggressive treatment.
A local population model of cancer is also studied, which takes into account the polyclonal nature of cancer cells and their interactions with the natural and specific immune system. In this latter model it is possible to observe phenomena such as the dormancy state and the escape phase for parameter values corresponding to the B-lymphoma BCL1 cancer [2], which explains the phenomena of immunoediting and of tumor escape from immunosurveillance [3] and could help in proposing more appropriate treatment protocols. A re-parameterization was then made based on some features specific to glioma-type tumor cells, assuming the presence of immunodeficiency, which yields periodic oscillatory coexistence of both the tumor population and the immune system cells; this could explain the clinical cases of remission and subsequent tumor recurrence. Finally, under certain conditions, a chaotic dynamics was obtained in the tumor population, which could explain the clinical cases that present a lack of controllability of the disease, above all in elderly people or in those with a clinical picture involving some deficiency in the normal functioning of the immune system.
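For reference (the standard form of the equation named in the abstract, not a formula copied from the dissertation), the Fisher-Kolmogorov-Petrovsky-Piskounov model combines diffusion, representing motility, with logistic growth, representing proliferation:

\[
\frac{\partial u}{\partial t} \;=\; D\,\nabla^{2} u \;+\; \rho\, u\!\left(1 - \frac{u}{K}\right),
\]

where u(x, t) is the tumor cell density, D the diffusion coefficient, ρ the proliferation rate, and K the carrying capacity; the proliferation-motility dichotomy discussed above corresponds to the balance between ρ and D.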
80

Algoritmo genético aplicado à determinação da melhor configuração e do menor tamanho amostral na análise da variabilidade espacial de atributos químicos do solo / Genetic algorithm applied to determining the best configuration and the smallest sample size in the analysis of the spatial variability of soil chemical attributes

Maltauro, Tamara Cantú 21 February 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / It is essential to determine a sampling design with a size that minimizes operating costs and maximizes the quality of the results when setting up an experiment that involves the study of the spatial variability of soil chemical attributes. Thus, this study aimed to resize a sample configuration with the fewest possible points for a commercial area composed of 102 points, using the information on the spatial variability of soil chemical attributes to optimize the process. Initially, Monte Carlo simulations were carried out, assuming Gaussian, isotropic stationary variables, an exponential model for the semivariance function, and three initial sampling configurations: systematic, simple random, and lattice plus close pairs. The Genetic Algorithm (GA) was applied to both the simulated data and the soil chemical attributes in order to resize the optimized sample, considering two objective functions based on the efficiency of spatial prediction and of geostatistical model estimation, respectively: maximization of the global accuracy measure and minimization of functions based on the Fisher information matrix. For the simulated data, with both objective functions, when the nugget effect and the range varied, the samplings generally showed the lowest values of the objective function for a nugget effect of 0 and a practical range of 0.9, and increasing the practical range produced a slight reduction in the number of optimized sampling points in most cases. For the soil chemical attributes, the GA was efficient in reducing the sample size with both objective functions. When maximizing global accuracy, the sample size varied from 30 to 35 points, corresponding to 29.41% to 34.31% of the initial grid, with a minimum spatial-prediction similarity to the original configuration equal to or greater than 85%; this is reflected in the optimization process, which produces similar maps for the original and optimized sample configurations. When minimizing the function based on the Fisher information matrix, the optimized sample size varied from 30 to 40 points, corresponding to 29.41% and 39.22% of the original grid, respectively; in this case, however, there was no similarity between the maps built from the initial and the optimized sample configurations. For both objective functions, the soil chemical attributes showed moderate spatial dependence for the original sample configuration, and most attributes showed moderate or strong spatial dependence for the optimized configuration. Thus, the optimization process was efficient when applied both to the simulated data and to the soil chemical attributes.
/ É necessário determinar um esquema de amostragem com um tamanho que minimize os custos operacionais e maximize a qualidade dos resultados durante a montagem de um experimento que envolva o estudo da variabilidade espacial de atributos químicos do solo. Assim, o objetivo deste trabalho foi redimensionar uma configuração amostral com o menor número de pontos possíveis para uma área comercial composta por 102 pontos, considerando a informação sobre a variabilidade espacial de atributos químicos do solo no processo de otimização. Inicialmente, realizaram-se simulações de Monte Carlo, assumindo as variáveis estacionárias Gaussiana, isotrópicas, modelo exponencial para a função semivariância e três configurações amostrais iniciais: sistemática, aleatória simples e lattice plus close pairs. O Algoritmo Genético (AG) foi utilizado para a obtenção dos dados simulados e dos atributos químicos do solo, a fim de se redimensionar a amostra otimizada, considerando duas funções-objetivo. Essas estão baseadas na eficiência quanto à predição espacial e à estimação do modelo geoestatístico, as quais são respectivamente: a maximização da medida de acurácia exatidão global e a minimização de funções baseadas na matriz de informação de Fisher. Observou-se pelos dados simulados que, para ambas as funções-objetivo, quando o efeito pepita e o alcance variaram, em geral, as amostragens apresentaram os menores valores da função-objetivo, com efeito pepita igual a 0 e alcance prático igual a 0,9. O aumento do alcance prático gerou uma leve redução do número de pontos amostrais otimizados para a maioria dos casos. Em relação aos atributos químicos do solo, o AG, com ambas as funções-objetivo, foi eficiente quanto à redução do tamanho amostral. Para a maximização da exatidão global, tem-se que o tamanho amostral da nova amostra reduzida variou entre 30 e 35 pontos que corresponde respectivamente a 29,41% e a 34,31% da malha inicial, com uma similaridade mínima de predição espacial, em relação à configuração original, igual ou superior a 85%. Vale ressaltar que tais dados refletem no processo de otimização, os quais apresentam similaridade entre os mapas construídos com as configurações amostrais: original e otimizada. Todavia, o tamanho amostral da amostra otimizada variou entre 30 e 40 pontos para minimizar a função baseada na matriz de informação de Fisher, a qual corresponde respectivamente a 29,41% e 39,22% da malha original. Mas, não houve similaridade entre os mapas elaborados quando se considerou a configuração amostral inicial e a otimizada. Para ambas as funções-objetivo, os atributos químicos do solo apresentaram moderada dependência espacial para a configuração amostral original. E, a maioria dos atributos apresentaram moderada ou forte dependência espacial para a configuração amostral otimizada. Assim, o processo de otimização foi eficiente quando aplicados tanto nos dados simulados como nos atributos químicos do solo.
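As a rough, hedged sketch of the kind of search described above (a generic illustration, not the author's implementation; the fitness function below is a hypothetical stand-in for the global-accuracy or Fisher-information criteria), a genetic algorithm can evolve subsets of the original 102 sampling points:

```python
import random

def genetic_subset_search(n_points, subset_size, fitness,
                          generations=200, pop_size=40, mutation_rate=0.1):
    """Evolve subsets of sampling-point indices that maximize a user-supplied fitness."""
    def random_subset():
        return tuple(sorted(random.sample(range(n_points), subset_size)))

    def crossover(a, b):
        # Child draws its points from the union of both parents.
        return tuple(sorted(random.sample(list(set(a) | set(b)), subset_size)))

    def mutate(subset):
        # Occasionally swap one selected point for an unselected one.
        subset = list(subset)
        if random.random() < mutation_rate:
            outside = list(set(range(n_points)) - set(subset))
            subset[random.randrange(subset_size)] = random.choice(outside)
        return tuple(sorted(subset))

    population = [random_subset() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy usage: a hypothetical fitness that favors subsets spread evenly over the indices.
best = genetic_subset_search(102, 30,
                             fitness=lambda s: min(b - a for a, b in zip(s, s[1:])))
print(best)
```

In the setting above, the fitness would instead be computed from a spatial-prediction accuracy measure (e.g., cross-validated kriging) or from a scalar function of the Fisher information matrix evaluated on the candidate subset.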
