311
Signal processing for biologically-inspired gradient source localization and DNA sequence analysis
Rosen, Gail L., 12 July 2006
Biological signal processing can help us gain knowledge about biological complexity and apply that knowledge to engineer better systems. Three areas are identified as critical to understanding biology: 1) understanding DNA, 2) examining overall biological function, and 3) evaluating these systems under environmental (i.e., turbulent) conditions.
DNA is investigated for coding structure and redundancy, and a new tandem repeat region, an indicator of a neurodegenerative disease, is discovered. The linear-algebraic framework used here lends itself to further analysis techniques. The work illustrates how signal processing is a tool for reverse engineering biological systems, and how a better understanding of biology can improve engineering designs.
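As an illustration of this signal-processing view of DNA, the sketch below estimates a tandem-repeat period by scoring base matches at each candidate lag. It is a simple stand-in for the thesis's linear-algebraic analysis; the `repeat_period` helper and its parameters are illustrative assumptions, not the author's method.

```python
def repeat_period(seq, max_lag=30):
    """Estimate the dominant tandem-repeat period of a DNA string by
    finding the lag at which bases most often match themselves."""
    n = len(seq)
    best_lag, best_score = 0, -1.0
    for lag in range(1, min(max_lag, n - 1) + 1):
        # fraction of positions whose base equals the base `lag` ahead
        score = sum(seq[i] == seq[i + lag] for i in range(n - lag)) / (n - lag)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# a CAG repeat region, the motif behind several neurodegenerative disorders
print(repeat_period("CAGCAGCAGCAGCAGCAGCAG"))  # → 3
```

Real repeat finders must also tolerate point mutations and insertions; this exact-match score is only the cleanest special case.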
Next, the way a single cell moves in response to a chemical gradient, known as chemotaxis, is examined. Inspiration from receptor clustering in chemotaxis, combined with a Hebbian learning method, is shown to improve a gradient-source (chemical/thermal) localization algorithm. The algorithm is implemented, and its performance is evaluated in diffusive and turbulent environments. We then show that sensor cross-correlation can help solve chemical localization in difficult turbulent scenarios. This points toward future techniques for gradient-source tracking and paves the way for biologically-inspired sensor networks in chemical localization.
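A minimal sketch of the cross-correlation idea: with two sensors sampling the same plume, the lag of the cross-correlation peak estimates the transport delay between them. The puff signal, noise level, and seven-sample delay below are synthetic assumptions, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_estimate(s1, s2):
    """Lag (in samples) of s2 relative to s1, read off the peak of the
    full cross-correlation of the mean-removed signals."""
    c = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
    return int(np.argmax(c)) - (len(s1) - 1)

# synthetic concentration puff passing sensor 1, then sensor 2 seven samples later
t = np.arange(400)
puff = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)
s1 = puff + 0.01 * rng.standard_normal(t.size)
s2 = np.roll(puff, 7) + 0.01 * rng.standard_normal(t.size)
print(delay_estimate(s1, s2))  # lag close to 7
```

With a known sensor spacing, the recovered delay converts directly into a transport-speed or bearing estimate, which is what makes the correlation usable for localization.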
312
Time and Space-Efficient Algorithms for Mobile Agents in an Anonymous Network
Kosowski, Adrian, 26 September 2013
Computing with mobile agents is rapidly becoming a mainstream topic in the theory of distributed computing. The main research questions undertaken in this study concern the feasibility of solving fundamental tasks in an anonymous network, subject to limitations on the resources available to the agent. The challenges considered include: exploring a graph by means of an agent with limited memory, discovering the network topology, and attempting to meet another agent in the network (rendezvous). The constraints imposed on the agent include the number of moves it is allowed to perform in the network, the amount of state memory available to it, its ability to communicate with other agents, and its a priori knowledge of the network topology or of global parameters.
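To make the move-complexity constraint concrete, here is a hedged sketch (not from the thesis) of the weakest such agent: a memoryless random walker in an anonymous graph, which sees only local ports and is charged one unit per move until every node has been visited.

```python
import random

def cover_steps(adj, start=0, seed=1, max_steps=10**6):
    """Number of moves a memoryless random-walk agent needs to visit every
    node. `adj` maps each node to its list of neighbours; node ids are used
    only for simulation bookkeeping -- the agent itself never sees them."""
    random.seed(seed)
    seen = {start}
    pos, steps = start, 0
    while len(seen) < len(adj) and steps < max_steps:
        pos = random.choice(adj[pos])  # choose an outgoing port uniformly
        seen.add(pos)
        steps += 1
    return steps

# a 6-node cycle; the expected cover time of an n-cycle is Theta(n^2)
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(cover_steps(cycle))
```

The thesis's interest is precisely in how much better an agent can do than this baseline when it is granted a few bits of state or some advice about the topology.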
313
Sistemas técnicos de trading no mercado de ações brasileiro: testando a hipótese de eficiência de mercado em sua forma fraca e avaliando se análise técnica agrega valor / Technical trading systems in the Brazilian stock market: testing the weak-form efficient market hypothesis and assessing whether technical analysis adds value
Serafini, Daniel Guedine, 28 January 2010
Faced with the unprecedented moment experienced by the Brazilian economy, and especially by the national stock exchange after Brazil obtained investment-grade status, this paper addresses a theme that has gained enormous space in the mainstream media: technical analysis. Using a sample of 37 stocks listed on the São Paulo Stock Exchange between January 1999 and August 2009, it examines whether technical analysis adds value to investment decisions. Four technical trading systems were tested against confidence intervals constructed by bootstrap sampling inference, consistent with the null hypothesis of weak-form market efficiency. More specifically, the results of each system applied to the original price series were compared with the average results obtained when the same systems were applied to 1,000 series of each asset simulated as random walks. If markets are efficient in their weak form, there should be no strategies with positive returns based only on the historical values of the assets; that is, there would be no reason for the results on the original series to exceed those on the simulated series. The empirical results suggest that the systems tested were unable to anticipate the future using only past data. However, some of them generated expressive returns and were surpassed by the simulated series in only about 25% of the sample, indicating that technical analysis does have some value.
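The bootstrap methodology can be sketched as follows: score a trading rule on the observed series, then on many random-walk surrogates built by resampling its log returns, and ask how often the surrogates beat the real series. The single moving-average crossover rule below is an illustrative stand-in for the four systems actually tested, and all prices are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

def sma_rule_return(prices, short=5, long=20):
    """Total log return of a long-only SMA-crossover rule: hold the asset
    whenever the short moving average sits above the long one."""
    p = np.asarray(prices, float)
    logret = np.diff(np.log(p))
    sshort = np.convolve(p, np.ones(short) / short, "valid")
    slong = np.convolve(p, np.ones(long) / long, "valid")
    signal = (sshort[long - short:] > slong).astype(float)  # aligned at window ends
    return float((signal[:-1] * logret[long - 1:]).sum())

def random_walk_surrogate(prices, rng):
    """Random-walk price path built from i.i.d. bootstrap-resampled log returns."""
    lr = np.diff(np.log(prices))
    sim = rng.choice(lr, size=lr.size, replace=True)
    return prices[0] * np.exp(np.concatenate([[0.0], np.cumsum(sim)]))

# synthetic price history standing in for one of the 37 stocks
prices = 100.0 * np.exp(np.concatenate(
    [[0.0], np.cumsum(0.0005 + 0.01 * rng.standard_normal(1000))]))
actual = sma_rule_return(prices)
sims = [sma_rule_return(random_walk_surrogate(prices, rng)) for _ in range(200)]
frac_beaten = float(np.mean([s >= actual for s in sims]))  # bootstrap p-value
```

Under weak-form efficiency `frac_beaten` should be unremarkable; a value near zero across many assets would be the kind of evidence the study looks for.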
314
Sélection de modèles robuste : régression linéaire et algorithme à sauts réversibles / Robust model selection: linear regression and a reversible jump algorithm
Gagnon, Philippe, 10 1900
No description available.
315
Machine learning via dynamical processes on complex networks / Aprendizado de máquina via processos dinâmicos em redes complexas
Thiago Henrique Cupertino, 20 December 2013
Extracting useful knowledge from data sets is a key concern in modern information systems, and the need for efficient techniques to extract that knowledge keeps growing. Machine learning is a research field dedicated to developing techniques that enable a machine to "learn" from data. Many techniques have been proposed, but there are still issues to be unveiled, especially in interdisciplinary research. In this thesis, we explore the advantages of network data representation to develop machine learning techniques based on dynamical processes on networks. The network representation unifies the structure, dynamics and function of the system it represents, and thus can capture the spatial, topological and functional relations of the data sets under analysis. We develop network-based techniques for the three machine learning paradigms: supervised, semi-supervised and unsupervised. The random-walk dynamical process is used to characterize the access of unlabeled data to data classes, configuring a new heuristic we call ease of access in the supervised paradigm. We also propose a classification technique which combines a high-level view of the data, via network topological characterization, with low-level relations, via similarity measures, in a general framework. Still in the supervised setting, the modularity and Katz centrality network measures are applied to classify sets of multiple observations, and an evolving network-construction method is applied to the dimensionality reduction problem. The semi-supervised paradigm is covered by extending the ease-of-access heuristic to cases in which only a few labeled samples and many unlabeled samples are available. A semi-supervised technique based on interacting forces is also proposed, for which we provide parameter heuristics and a stability analysis via a Lyapunov function.
Finally, an unsupervised network-based technique uses the concepts of pinning control and consensus time from dynamical processes to derive a similarity measure used to cluster data. The data are represented by a connected, sparse network in which the nodes are dynamical elements. Simulations on benchmark data sets and comparisons with well-known machine learning techniques are provided for all proposed techniques. The advantages of network data representation and dynamical processes for machine learning are highlighted in all cases.
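The random-walk "ease of access" idea can be sketched as an absorbing walk on a similarity graph: start a walker at the unlabeled point, let the labeled points absorb it, and pick the class with the largest absorption probability. This is an illustrative reconstruction under an assumed Gaussian similarity, not the thesis's exact formulation.

```python
import numpy as np

def ease_of_access(X, y, x_new, sigma=1.0):
    """Classify x_new by absorption probabilities of a random walk on a
    fully connected Gaussian-similarity graph whose labeled nodes absorb."""
    pts = np.vstack([X, x_new])
    n = len(pts)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)          # row-stochastic transitions
    labeled = np.arange(len(y))
    free = np.array([n - 1])                      # only x_new is transient here
    # absorbing chain: B = (I - Q)^{-1} R gives absorption probabilities
    Q = P[np.ix_(free, free)]
    R = P[np.ix_(free, labeled)]
    B = np.linalg.solve(np.eye(len(free)) - Q, R)[0]
    classes = np.unique(y)
    scores = [B[y == c].sum() for c in classes]   # total mass absorbed per class
    return classes[int(np.argmax(scores))]

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.2, 2.9]])
y = np.array([0, 0, 1, 1])
print(ease_of_access(X, y, np.array([0.1, 0.2])))  # class 0 is easier to reach
```

With several unlabeled points the transient block `Q` grows accordingly, and the same linear solve yields all absorption probabilities at once, which is what makes the semi-supervised extension natural.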
316
Estrutura fractal em séries temporais: uma investigação quanto à hipótese de passeio aleatório no mercado à vista de commodities agrícolas brasileiro / Fractal structure in time series: an investigation of the random walk hypothesis in the Brazilian agricultural commodities spot market
Santos, Alessandra Gazzoli, 14 August 2013
Economic variables are often governed by dynamic, non-linear processes that can give rise to long-term dependence and non-periodic, non-cyclical patterns with abrupt changes of trend. Commodity prices exhibit this type of behavior, and the peculiarities of these markets can generate fractionally integrated time series whose singularities are not properly captured by traditional analytic models based on the efficient market hypothesis and random-walk processes. This study therefore investigated the presence of fractal structures in the spot markets of some of the most important Brazilian agricultural commodities: coffee, live cattle, sugar, corn, soybean and calf. Traditional techniques were used alongside techniques specific to fractal time series analysis, such as rescaled range (R/S) analysis, different fractality hypothesis tests, and ARFIMA and FIGARCH models. The results showed that the drift component did not exhibit fractal behavior, except for the calf series; the volatility component, however, exhibited fractal behavior for all the commodities analyzed.
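Rescaled range analysis, mentioned above, can be sketched as follows: for each window size, divide the range R of the cumulative mean-adjusted series by the window's standard deviation S, and read the Hurst exponent H off the slope of log(R/S) against log(size). H near 0.5 indicates random-walk increments; H above 0.5 indicates persistence (long memory). Window sizes and the white-noise check below are illustrative choices, not the study's settings.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Hurst exponent of a series of increments via classical R/S analysis."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            z = np.cumsum(w - w.mean())     # cumulative mean-adjusted walk
            r = z.max() - z.min()           # its range R
            s = w.std()                     # window standard deviation S
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    return float(np.polyfit(np.log(sizes), np.log(rs), 1)[0])

rng = np.random.default_rng(7)
white = rng.standard_normal(4096)  # increments of a pure random walk
H = hurst_rs(white)                # close to 0.5, with a known small-sample upward bias
```

In the study's terms, an H significantly above 0.5 for a commodity's return or volatility series is the signature of fractal structure that ARFIMA and FIGARCH models then parameterize.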
317
Modélisation d'un phénomène pluvieux local et analyse de son transfert vers la nappe phréatique / Modeling a local rainfall phenomenon and analyzing its transfer to the water table
Golder, Jacques, 24 July 2013
In the context of research on water resource quality, studying the process of mass transfer from the soil to the water table is essential to understanding groundwater pollution. Soluble pollutants at the surface (products of human activity such as fertilizers and pesticides) can migrate to the water table through the porous medium that is the soil. This pollution-transfer scenario rests on two phenomena: the rainfall that generates the mass of water at the surface, and the dispersion of that water through the porous medium. Mass dispersion in a natural porous medium such as soil is a vast and difficult research subject, both experimentally and theoretically. Its modeling is a concern of the EMMAH laboratory, in particular within the Sol Virtuel project, in which a transfer model (the PASTIS model) was developed. Coupling this transfer model with an input model describing the random dynamics of rainfall is one of the objectives of this thesis. The thesis pursues this objective by drawing, on the one hand, on experimental observations and, on the other, on modeling inspired by the analysis of the observational data. The first part of the work is devoted to building a stochastic rainfall model. The choice and nature of the model are based on characteristics obtained from the analysis of rainfall-depth data collected over 40 years (1968-2008) at the INRA Research Centre in Avignon. To this end, the cumulative rainfall record is treated as a random walk in which the jumps and the waiting times between jumps are, respectively, the random amplitudes of rain events and the random durations between two successive events. The probability law of the jumps (log-normal) and that of the waiting times between jumps (alpha-stable) are obtained by analyzing the probability laws of the amplitudes and occurrences of rain events. We then show that this random-walk model tends to a time-subordinated geometric Brownian motion (when the space and time steps of the walk tend to zero simultaneously while keeping a constant ratio), whose probability density is governed by a fractional Fokker-Planck equation (FFPE). Two approaches are then used to implement the model: the first is stochastic and relies on the link between the stochastic process arising from the Itô differential equation and the FFPE; the second uses direct numerical resolution by discretization of the FFPE.
In line with the main objective of the thesis, the second part of the work is devoted to analyzing the contribution of rainfall to fluctuations of the water table. This analysis is based on two simultaneous records of rainfall depth and water-table level over 14 months (February 2005 - March 2006). A statistical study of the links between the rainfall and water-table signals proceeds as follows: the water-table variation data are analyzed and processed to isolate the fluctuations coherent with rain events. In addition, to account for mass dispersion in the soil, the transport of the rainwater mass through the soil is modeled with a transfer code (the PASTIS model), taking the measured rainfall depths as input. Among other things, the model results allow the soil water status at a given depth (here set at 1.6 m) to be estimated. A study of the correlation between this water status and the water-table fluctuations is then carried out, complementing the study described above, to illustrate the possibility of modeling the impact of rainfall on water-table fluctuations.
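The rainfall random walk described above can be sketched numerically: log-normal jump amplitudes, totally skewed positive alpha-stable waiting times (sampled here with Kanter's method), and their cumulative sums as the event-time subordinator and the cumulative-rainfall walk. The tail index, log-normal parameters and event count are placeholder values, not the fitted Avignon parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def one_sided_stable(alpha, size, rng):
    """Kanter's sampler for totally skewed positive alpha-stable variates
    (0 < alpha < 1), suitable for heavy-tailed waiting times."""
    u = rng.uniform(0.0, np.pi, size)     # uniform angle
    e = rng.exponential(1.0, size)        # unit exponential
    return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

n_events = 500
waits = one_sided_stable(0.8, n_events, rng)               # dry spells between events
jumps = rng.lognormal(mean=1.0, sigma=0.6, size=n_events)  # event rainfall depths
event_times = np.cumsum(waits)   # subordinator: the walk's operational time
cum_rain = np.cumsum(jumps)      # cumulative rainfall at each event
```

Letting the jump and waiting-time scales shrink at a constant ratio is the limit in which this walk approaches the time-subordinated geometric Brownian motion governed by the FFPE.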
318
Návrh a identifikace rozšířeného modelu MEMS gyroskopu / An Extended Model of a MEMS Gyroscope: Design and Identification
Vágner, Martin, January 2016
The thesis addresses the measurement and modeling of MEMS gyroscopes based on input-output characteristics. The first part briefly reviews the state of the art. The second part is dedicated to measurement methodology; critical points and sources of uncertainty are discussed and evaluated using measurements or simulations. The third part presents key characteristics of MEMS gyroscopes based on a survey of a group of different sensor types. The results revealed a significant influence of supply voltage, which causes bias drift of the gyroscope and of the internal temperature sensor. This error can be comparable to temperature drift; however, the effect is not addressed in the literature. The second observed effect is the temperature dependency of angle random walk. In the last part, a general model of a MEMS gyroscope is rewritten to reflect the observed effects. The structure is chosen to be easily extendable, and the coefficients are expressed so as to allow a comparison of the nominal parameters of different sensors.
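Angle random walk, whose temperature dependency the thesis reports, is conventionally read off the Allan deviation of the rate signal at tau = 1 s. Below is a hedged sketch with purely synthetic white rate noise; the sampling rate and noise level are arbitrary assumptions, not measured sensor data.

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a rate signal sampled at fs Hz.
    For white rate noise it falls as 1/sqrt(tau), and its value at
    tau = 1 s is the angle random walk coefficient."""
    theta = np.cumsum(rate) / fs                 # integrate rate to angle
    out = []
    for tau in taus:
        m = int(round(tau * fs))
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        out.append(np.sqrt(0.5 * np.mean(d ** 2)) / tau)
    return np.array(out)

rng = np.random.default_rng(0)
fs = 100.0
rate = 0.02 * rng.standard_normal(200_000)  # white noise: ARW = 0.02/sqrt(fs) = 0.002
adev = allan_deviation(rate, fs, np.array([0.1, 1.0, 10.0]))
```

Repeating this computation at several controlled temperatures, as the thesis does for real sensors, exposes the ARW-versus-temperature dependency directly.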
319
Structural Similarity: Applications to Object Recognition and Clustering
Curado, Manuel, 03 September 2018
In this thesis, we propose several developments in the context of structural similarity, addressing both node (local) similarity and graph (global) similarity. Concerning node similarity, we focus on improving the diffusive process used to compute it (e.g., commute times) by modifying or rewiring the structure of the graph (graph densification), although some advances in Laplacian-based ranking are also included. Graph densification is a particular case of what we call graph rewiring, a novel field (analogous to image processing) in which input graphs are rewired to be better conditioned for subsequent pattern recognition tasks (e.g., clustering). We contribute a scalable and effective method driven by Dirichlet processes, proposing both a completely unsupervised and a semi-supervised approach to Dirichlet densification. We also contribute new random walkers (return random walks) that serve as structural filters and as asymmetry detectors in directed brain networks used for early prediction of Alzheimer's disease (AD). Graph similarity is addressed by designing structural information channels as a means of measuring the mutual information between graphs. To this end, we first embed the graphs by means of commute times. Commute-time embeddings have good properties for Delaunay triangulations (the typical representation for graph matching in computer vision), which means they can act as both encoders and decoders in the channel (since they are invertible). Consequently, structural noise can be modeled by the deformation introduced in one of the manifolds to fit the other. This methodology leads to a highly discriminative similarity measure, since the mutual information is measured on the manifolds (vectorial domain) through copulas and bypass entropy estimators.
This is consistent with the methodology of decoupling the measurement of graph similarity into two steps: a) linearizing the Quadratic Assignment Problem (QAP) by means of the embedding trick, and b) measuring similarity in vector spaces. The QAP is also investigated in this thesis. More precisely, we analyze the behavior of m-best graph matching methods. These methods usually start from a couple of best solutions and then expand the search space locally by excluding previously clamped variables. The next variable to clamp is usually selected at random, but we show that this degrades performance when structural noise (outliers) arises. As an alternative, we propose several heuristics for spanning the search space and evaluate all of them, showing that they are usually better than random selection; these heuristics are particularly interesting because they exploit the structure of the affinity matrix, and efficiency is improved as well. Concerning the application domains explored in this thesis, we focus on object recognition (graph similarity), clustering (rewiring), compression/decompression of graphs (with links to extremal graph theory), 3D shape simplification (sparsification) and early prediction of AD. / Ministerio de Economía, Industria y Competitividad (Referencia TIN2012-32839 BES-2013-064482)
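Commute-time embeddings, central to the pipeline above, can be sketched from the graph Laplacian: scaling the non-constant Laplacian eigenvectors by sqrt(vol / lambda) yields coordinates whose squared Euclidean distances equal commute times. The 3-node path-graph sanity check is an illustrative sketch, not the thesis's implementation.

```python
import numpy as np

def commute_time_embedding(A):
    """Rows are node coordinates such that ||y_u - y_v||^2 equals the
    commute time vol * (L+_uu + L+_vv - 2 L+_uv) of the walk on A.
    Assumes A is the adjacency matrix of a connected undirected graph."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                 # combinatorial Laplacian
    vol = d.sum()                      # graph volume (sum of degrees)
    w, V = np.linalg.eigh(L)           # ascending eigenvalues
    w, V = w[1:], V[:, 1:]             # drop the zero eigenvalue / constant vector
    return np.sqrt(vol) * V / np.sqrt(w)

# path graph 0-1-2: across one unit-resistance edge, CT = vol * R_eff = 4 * 1
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
Y = commute_time_embedding(A)
ct01 = float(((Y[0] - Y[1]) ** 2).sum())   # ≈ 4.0
```

Because the map is built from an orthogonal eigenbasis, it is invertible up to the constant component, which is what lets the embedding serve as both encoder and decoder in the information channel described above.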
320
Access Blood Flow Measurement Using Angiography
Koirala, Nischal, 26 September 2018
No description available.