About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Emprego de comitê de máquinas para segmentação da íris / Use of committee machines for iris segmentation

Schneider, Mauro Ulisses, 23 August 2010
Funding: Fundo Mackenzie de Pesquisa.

The use of biometric systems has been widely encouraged by both government and private entities to replace or improve traditional security systems. Biometric systems are becoming increasingly indispensable for protecting life and property, mainly because of their robustness, reliability, resistance to counterfeiting, and fast authentication. In real-world applications, the image acquisition devices and the environment are not always controlled and may, under certain circumstances, produce images that are noisy or show large variations in tonality, texture, and geometry, hindering segmentation and consequently the authentication of an individual. To deal effectively with such problems, this dissertation investigates the use of committee machines combined with digital image processing techniques for iris segmentation. The components employed in the composition of the committee machines are support vector clustering, k-means, and self-organizing maps. To evaluate the performance of the tools developed here, the experimental results are compared with related work reported in the literature. Experiments on the publicly available UBIRIS database indicate that committee machines can be successfully applied to iris segmentation.
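The committee idea can be sketched in a few lines: several weak segmenters each vote on a per-pixel binary mask, and the majority wins. This is only an illustrative sketch under invented data, not the dissertation's pipeline; in particular, the support-vector-clustering member is replaced here by a simple median-threshold expert, and the "eye" image is synthetic.

```python
import numpy as np

def two_means_member(img, seed=0):
    """2-cluster k-means on pixel intensity; the darker cluster is foreground."""
    x = img.ravel()
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=2, replace=False)
    for _ in range(20):
        assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                centers[k] = x[assign == k].mean()
    dark = int(np.argmin(centers))
    return (assign == dark).astype(int).reshape(img.shape)

def median_member(img):
    """Crude global-threshold expert: pixels below the median are foreground."""
    return (img < np.median(img)).astype(int)

def committee_segment(img, members):
    """Majority vote over the binary masks produced by all committee members."""
    votes = np.stack([member(img) for member in members])
    return (2 * votes.sum(axis=0) > len(members)).astype(int)

# Toy eye image: dark disc (the "iris") on a bright background, plus noise.
yy, xx = np.mgrid[:64, :64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2, 0.2, 0.9)
img = img + 0.05 * np.random.default_rng(1).standard_normal(img.shape)

mask = committee_segment(
    img,
    [lambda im: two_means_member(im, 0),
     lambda im: two_means_member(im, 1),
     median_member],
)
```

Note how the vote corrects the weak median expert: the median threshold marks roughly half the background as foreground on its own, but it is outvoted everywhere by the two k-means members.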

Mapas auto-organizáveis na construção de recursos de aprendizagem adaptativos: uma aplicação no ensino de música / Self-organizing maps in the construction of adaptive learning resources: an application in music teaching

Ferreira, Fabiano Rodrigues, 29 February 2008
Funding: Fundo Mackenzie de Pesquisa.

The Brazilian educational scenario suffers from a lack of incentive for a musical apprenticeship that leads students to reflect on their own reality. Owing to current hegemonic politics, educational processes are generally characterized by a diminished potential for student reflection, in a society that prioritizes strictly technicist teaching, as contemporary society does. As a result, students are often unable to establish relationships between what was learned and their own lives. Mechanisms are therefore needed that help adapt teaching to the student's cultural context, leading to meaningful, contextualized ethnic learning. Learning objects are examples of technological resources that emerged as a way to organize and structure digital educational material. Although this concept is a new paradigm in education, it has been widely used in educational systems through the constant and growing delivery of learning objects over the Internet. This work therefore focuses on an adaptive learning object architecture, applied, as an example, to the learning of Brazilian musical rhythms. These objects are dynamically retrieved from repositories through techniques based on self-organizing maps. Objects are selected so as to create learning resources suited to some desired adaptivity factor, such as prior knowledge, learning styles, or cultural aspects.
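A minimal illustration of SOM-based retrieval, under invented data: a tiny one-dimensional map is trained on object feature vectors, and the objects that map to the learner's best-matching unit are recommended. The object names and two-dimensional features are made up for the example; the thesis's actual metadata and map size will differ.

```python
import numpy as np

def train_som(data, grid=(1, 2), epochs=150, lr=0.5, sigma=1.0, seed=0):
    """Minimal online SOM: move the winning unit and its grid neighbours
    toward each sample, with learning rate and neighbourhood decaying over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        frac = t / epochs
        for x in data[rng.permutation(len(data))]:
            bmu = np.linalg.norm(weights - x, axis=1).argmin()
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nh = np.exp(-d2 / (2 * (sigma * (1 - frac) + 0.1) ** 2))
            weights += (lr * (1 - frac) + 0.01) * nh[:, None] * (x - weights)
    return weights

def recommend(weights, objects, features, learner_profile):
    """Return the learning objects whose features map to the learner's best unit."""
    bmu = np.linalg.norm(weights - learner_profile, axis=1).argmin()
    assigned = np.linalg.norm(
        weights[:, None, :] - features[None, :, :], axis=2).argmin(axis=0)
    return [o for o, unit in zip(objects, assigned) if unit == bmu]

# Invented repository: feature vectors, e.g. [tempo complexity, harmonic complexity].
features = np.array([[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]])
objects = ["samba-basics", "samba-drills", "advanced-baiao", "advanced-maracatu"]
weights = train_som(features)
recs = recommend(weights, objects, features, np.array([0.12, 0.08]))
```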

Intelligent information processing in building monitoring systems and applications

Skön, J.-P. (Jukka-Pekka), 10 November 2015
Abstract: Global warming has set in motion a trend of cutting energy costs to reduce the carbon footprint. Reducing energy consumption, cutting greenhouse gas emissions and eliminating energy wastage are among the main goals of the European Union (EU). The buildings sector is the largest user of energy and the largest CO2 emitter in the EU, accounting for an estimated 40% of total consumption. According to the Intergovernmental Panel on Climate Change, 30% of the energy used in buildings could be saved, with net economic benefit, by 2030. At the same time, indoor air quality is increasingly recognized as a distinct health hazard. Because of these two factors, energy efficiency and healthy housing have become active topics in international research. The main aim of this thesis was to study and develop a wireless building monitoring and control system that produces valuable information and services for end-users using computational methods. The technology developed in this thesis relies heavily on building automation systems (BAS) and on parts of the concept termed the "Internet of Things" (IoT). The data refining process used is called knowledge discovery from data (KDD) and comprises methods for data acquisition, pre-processing, modeling, visualization, interpretation of the results, and sharing the new information with end-users. Four examples of data analysis and knowledge deployment are presented. The results of the case studies show that innovative use of computational methods provides a good basis for researching and developing new information services. The data mining methods used, such as regression and clustering, complemented with efficient data pre-processing, have great potential for processing large amounts of multivariate data effectively. The innovative and effective use of digital information is a key element in the creation of new information services. The service business in the building sector is significant, and plenty of new possibilities await capable and advanced companies and organizations. End-users, such as building maintenance personnel and residents, should be taken into account at an early stage of the data refining process, and more can be gained through bold co-operation between companies and organizations, by using computational methods to turn data into valuable information, and by applying the latest technologies in the research and development of new innovations. The thesis is based on five publications, which describe the monitoring system developed, demonstrate the data refining process in different cases, and give examples of computational methods suited to each stage of the process; they cover an information service for energy consumption and indoor air quality, and indoor-air analyses of detached houses, apartment buildings and school buildings.
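One step of such a KDD pipeline can be sketched with synthetic data: fit a regression baseline of energy use against outdoor temperature, then flag hours that deviate strongly from the temperature-adjusted expectation. The data, model and thresholds below are illustrative assumptions only, not those of the thesis.

```python
import numpy as np

# Synthetic hourly data: energy use rises as outdoor temperature drops (heating).
rng = np.random.default_rng(0)
temp_out = rng.uniform(-20, 15, 500)                   # outdoor temperature, deg C
energy = 50 - 1.8 * temp_out + rng.normal(0, 3, 500)   # kWh, toy heating signature

# Pre-processing: drop obvious sensor glitches before modelling.
ok = (energy > 0) & (energy < 200)
slope, intercept = np.polyfit(temp_out[ok], energy[ok], 1)

def expected_use(t):
    """Regression-based baseline: expected energy use at outdoor temperature t."""
    return slope * t + intercept

# Knowledge deployment: flag hours whose use deviates strongly from the
# temperature-adjusted baseline, e.g. for maintenance personnel.
residual = energy[ok] - expected_use(temp_out[ok])
anomalous = np.abs(residual) > 3 * residual.std()
```

The fitted slope recovers the simulated heating signature (about -1.8 kWh per degree), and the 3-sigma rule flags only a small fraction of hours as anomalous.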

Définition d'un substrat computationnel bio-inspiré : déclinaison de propriétés de plasticité cérébrale dans les architectures de traitement auto-adaptatif / Design of a bio-inspired computing substrata : hardware plasticity properties for self-adaptive computing architectures

Rodriguez, Laurent, 01 December 2015
The increasing degree of on-chip parallelism, driven by ever-growing integration density, raises a number of challenges, such as routing information through a data bottleneck, or simply exploiting massive and growing parallelism with modern computing paradigms, most of which derive from a sequential history. To relieve the designer of this complexity, we define a new type of bio-inspired architecture based on the concept of self-adaptation. Mimicking brain plasticity, this architecture is able to adapt to its internal and external environment in a homeostatic way. It belongs to the family of embodied computing: the computing substrate is no longer thought of as a black box programmed for a given task, but is shaped by its environment and by the applications it supports. In our work, we propose a model of self-organizing neural map, DMADSOM (Distributed Multiplicative Activity Dependent SOM), based on the principle of dynamic neural fields (DNFs), to bring the concept of plasticity to hardware. The originality of this model is that it adapts to the data of each stimulus without needing a continuum over consecutive stimuli; this generalizes the applications of such networks, while activity is still computed according to dynamic neural field theory. DNF networks are not directly portable onto today's hardware technology because of their high connectivity, and we propose several solutions to this problem. The first is to minimize connectivity and approximate the network's behaviour through learning on the remaining lateral connections; this shows good behaviour in some application cases. To address the general case, starting from the observation that when a signal propagates step by step across a grid topology the propagation time encodes the distance travelled, we also propose two methods that emulate the full, wide connectivity of neural fields efficiently and close to hardware constraints. The first substrate computes the potentials transmitted over the network by successive iterations, letting the data propagate in all directions; it can compute all the lateral potentials of the map in a minimum number of iterations thanks to a particular weighting of the iterations. The second uses a spike representation of the potentials, transmitted over the grid without cycles, and reconstructs the lateral potentials over the propagation iterations; it is highly customisable and has very low complexity while still computing the lateral potentials. The network supported by these substrates can characterize the statistical densities of the data processed by the architecture and control, in a distributed manner, the allocation of computation cells.
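The key idea behind the first substrate — propagation time encodes distance, so full lateral connectivity can be emulated with purely local exchanges — can be sketched as follows. This is a simplified max-propagation variant, not the thesis's exact weighting scheme: after k iterations every cell holds the strongest source activity attenuated by decay raised to the grid distance, using only 3x3 neighbour communication.

```python
import numpy as np

def propagate(activity, decay=0.7, steps=16):
    """Each iteration exchanges potential with the 8 grid neighbours only;
    after enough steps (the grid diameter) every cell holds
    max over sources of activity[source] * decay ** chebyshev_distance."""
    pot = activity.astype(float).copy()
    h, w = pot.shape
    for _ in range(steps):
        padded = np.pad(pot, 1, constant_values=0.0)
        windows = [padded[i:i + h, j:j + w]
                   for i in range(3) for j in range(3) if (i, j) != (1, 1)]
        pot = np.maximum(pot, decay * np.max(windows, axis=0))
    return pot

grid = np.zeros((8, 8))
grid[2, 2] = 1.0            # one active site on the neural map
pot = propagate(grid)
```

Each hop multiplies the carried value by `decay`, so a cell at Chebyshev distance d from the source ends up with decay**d — the wide lateral kernel is recovered without any long-range wiring.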

La prévision des périodes de stress fiscal : le rôle des indicateurs fiscaux, financiers et de gouvernance / Predicting fiscal stress events : the role of fiscal, financial and governance indicators

Cergibozan, Raif, 12 December 2018
Europe went through the most severe economic crisis of its recent history following the global financial crisis of 2008. This thesis therefore aims to empirically identify the determinants of this crisis within the framework of the 15 core EU member countries (EU-15). To do so, the study develops a continuous fiscal stress index, in contrast to previous empirical studies that tend to use event-based crisis indicators, which identifies the debt crises in the EU-15 from 2003 to 2015, and it employs three different estimation techniques, namely Self-Organizing Map, multivariate logit, and panel Markov regime-switching models. Our estimation results show, first, that the fiscal stress index correctly identifies the timing and duration of the debt crisis in each EU-15 member country. The empirical results also indicate, across the three models, that the debt crisis in the EU-15 was the consequence of the deterioration of both financial and macroeconomic variables, such as non-performing loans over total loans, GDP growth, unemployment rates, the primary balance over GDP, and the cyclically adjusted balance over GDP. In addition, variables measuring governance quality, such as voice and accountability, regulatory quality, and government effectiveness, play a significant role in the emergence and duration of the debt crisis in the EU-15. As the econometric results clearly indicate the importance of fiscal deterioration in the occurrence of the European debt crisis, the study also tests fiscal convergence among the EU member countries. The results indicate that Portugal, Ireland, Italy, Greece, and Spain diverge from the other EU-15 countries in terms of the public debt-to-GDP ratio, while all of these countries except Greece converge to the EU-10 in terms of the budget deficit-to-GDP ratio.
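A toy version of a continuous fiscal stress index: z-score each indicator, flip the sign of series where a fall signals stress (such as GDP growth), and average. The indicator set, data and crisis threshold below are invented for illustration; the thesis's index construction is more elaborate.

```python
import numpy as np

def fiscal_stress_index(indicators, signs):
    """Equal-weighted composite of z-scored indicators; `signs` flips series
    where a *fall* signals stress (e.g. GDP growth gets sign -1)."""
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    return (z * signs).mean(axis=1)

# Toy annual panel for one country: [bond spread, debt/GDP, GDP growth].
data = np.array([
    [1.0, 60,  2.5],
    [1.2, 62,  2.0],
    [1.1, 63,  2.2],
    [4.5, 80, -1.5],   # crisis-like year
    [5.0, 85, -3.0],   # crisis-like year
    [2.0, 78,  0.5],
])
index = fiscal_stress_index(data, signs=np.array([1, 1, -1]))

# Flag years more than one standard deviation above average stress.
crisis_years = index > 1.0
```

Under these made-up numbers, only the two crisis-like years cross the threshold, which is the kind of continuous identification (timing and duration) an event-based dummy cannot provide.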

CLUSTERING AND VISUALIZATION OF GENOMIC DATA

Sutharzan, Sreeskandarajan, 26 July 2019
No description available.

Využití neuronových sítí v klasifikaci srdečních onemocnění / Use of neural networks in classification of heart diseases

Skřížala, Martin, January 2008
This thesis discusses the design and use of artificial neural networks as ECG classifiers and detectors of heart disease, especially myocardial ischaemia, in the ECG signal. Changes in the ST-T complex are an important indicator of ischaemia in the ECG: different types of ischaemia manifest particularly as depression or elevation of the ST segment and as changes in the T wave. The first part of the thesis covers the theoretical background and describes the changes in the ECG signal that arise with different types of ischaemia. The second part deals with pre-processing of the ECG signal for classification by a neural network: filtering, QRS detection, ST-T detection, and principal component analysis. The last part describes the design of a detector of myocardial ischaemia based on artificial neural networks, using two types of network, back-propagation and self-organizing maps, and presents the results of the algorithms used. The appendix contains a detailed description of each neural network, of the program for classification of ECG signals by ANN, and of the program's functions. The program was developed in Matlab R2007b.
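The ST-segment measurement that such a detector builds on can be sketched as follows: compare the mean amplitude in a window after the R peak with the isoelectric (PR) baseline before it. The window offsets, the decision threshold, and the synthetic beat are illustrative assumptions, not clinically validated values or the thesis's settings.

```python
import numpy as np

def st_deviation(beat, fs, r_idx, baseline_ms=(-80, -60), st_ms=(60, 80)):
    """ST-segment deviation (in signal units) of one beat: mean amplitude in a
    window after the R peak minus the isoelectric (PR) baseline before it.
    Window offsets in milliseconds are illustrative, not clinical standards."""
    def window_mean(lo_ms, hi_ms):
        lo = r_idx + int(lo_ms * fs / 1000)
        hi = r_idx + int(hi_ms * fs / 1000)
        return beat[lo:hi].mean()
    return window_mean(*st_ms) - window_mean(*baseline_ms)

# Synthetic beat at 500 Hz: flat baseline, a crude R spike, a depressed ST segment.
fs = 500
beat = np.zeros(400)
r = 200
beat[r - 2:r + 3] = [0.3, 0.8, 1.5, 0.8, 0.3]   # crude QRS complex
beat[r + 25:r + 50] = -0.15                     # ST depression ~50-100 ms after R

dev = st_deviation(beat, fs, r)
ischaemic = dev < -0.1        # e.g. flag depression deeper than 0.1 mV
```

Features of this kind (possibly compressed by PCA, as in the thesis) are what the back-propagation network or self-organizing map would then classify.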

Rozpoznávání a klasifikace emocí na základě analýzy řeči / Emotional State Recognition and Classification Based on Speech Signal Analysis

Černý, Lukáš, January 2010
The diploma thesis focuses on the classification of emotions. It deals with the parameterization of sound files by suprasegmental and segmental methods, with regard to the subsequent use of these features. The Berlin database of emotional speech is used, which contains many recordings with emotions. Parameterization produces files that are divided into two parts: the first is used for training and the second for testing. The classifier of interest is the self-organizing network. The thesis includes a Matlab program that can be used to parameterize any database. After parameterization, the data are classified by a self-organizing network, and the hit rates achieved are presented at the end of the thesis.
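Segmental parameterization of the kind described can be sketched with two classic frame-level features, log-energy and zero-crossing rate; the thesis's actual feature set may differ, and the signals below are synthetic stand-ins for calm and agitated speech.

```python
import numpy as np

def segment_features(signal, fs, frame_ms=25, hop_ms=10):
    """Segmental parameterization: per-frame log-energy and zero-crossing rate,
    two simple stand-ins for a fuller speech-emotion feature set."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    feats = []
    for start in range(0, len(signal) - frame + 1, hop):
        x = signal[start:start + frame]
        energy = np.log(np.sum(x ** 2) + 1e-12)          # frame log-energy
        zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2   # zero crossings per sample
        feats.append((energy, zcr))
    return np.array(feats)

# Synthetic one-second signals: quiet low-pitched vs. loud higher-pitched tone.
fs = 16000
t = np.arange(fs) / fs
calm = 0.1 * np.sin(2 * np.pi * 120 * t)
agitated = 0.8 * np.sin(2 * np.pi * 300 * t)

f_calm = segment_features(calm, fs)
f_agit = segment_features(agitated, fs)
```

Each row is one frame's feature vector; in the thesis's setup such vectors (one set for training, one for testing) would be fed to the self-organizing network.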

Agile Practices in Production Development: Investigation of how agile practices may be applied in a production development context and what the expected effects are.

Anderzon, Samuel; Davidsson, Filip, January 2021
Globalization has continuously increased competition among companies, which entails a need for faster and more frequent deliveries of new products. Traditional project management methods, such as stage-gate and waterfall, are commonly used in production development projects and build on a sequential approach. These methods have proven to have disadvantages: limited flexibility, long lead times, and communication barriers between the actors at each stage. The software industry encountered these obstacles earlier and responded by introducing agile project management, which improves adaptability and allows changes, prompted by new requirements from stakeholders or customers, to be made throughout the entire development process. However, it remains unknown how agile models can improve production development. The purpose of this study was therefore to investigate how agile models can be applied to production development and what the effects are.
The authors performed a case study at eight different companies within the automotive industry, with the purpose of gaining a deeper understanding of the case companies' current production development processes and reviewing how familiar the organizations are with the concept of agile project management. The empirical data were collected through questionnaires, interviews, and document reviews. The analysis compared the empirical findings with the theoretical background across eleven categories related to project management (e.g., project goals, process, customer integration), and concluded that the case companies conduct their production development projects exclusively in a sequential manner. The eleven categories were then used, together with the theoretical background on agile project management, to create the result by brainstorming practices for becoming more agile.
The results are presented as three different scenarios, depending on how agile a company would like to be. For instance, two process models are suggested: one that is completely agile and one that is a hybrid of agile and stage-gate. Further suggested practices include the implementation of self-organized teams, a holistic approach towards internal and external partners, and a reduced demand for documentation. Additionally, three aspects to consider during implementation are presented. The expected outcomes and effects of applying these practices are discussed in the final chapter. Among them are a company culture that attracts and retains talented personnel, in which shared responsibilities and authority lead employees to greater commitment and a sense of ownership of their projects. The companies are also expected to experience a more flexible and responsive approach to production development projects, with a strong focus on customer requirements and on creating customer value.

Combining Multivariate Statistical Methods and Spatial Analysis to Characterize Water Quality Conditions in the White River Basin, Indiana, U.S.A.

Gamble, Andrew Stephan, 25 February 2011
Indiana University-Purdue University Indianapolis (IUPUI)

This research performs a comparative study of techniques for combining spatial data and multivariate statistical methods to characterize water quality conditions in a river basin. The study was performed on the White River basin in central Indiana and uses sixteen physical and chemical water quality parameters collected from 44 monitoring sites, along with various spatial data related to land use and land cover, soil characteristics, terrain characteristics, eco-regions, etc. Parameters derived from the spatial data were analyzed using ArcHydro tools and included in the multivariate analyses to create classification equations relating spatial and spatio-temporal attributes of the watershed to water quality data at monitoring stations. The study compares various statistical estimates (mean, geometric mean, trimmed mean, and median) of the monitored water quality variables as representations of annual and seasonal water quality conditions. The relationship between these estimates and the spatial data is then modeled via linear and non-linear multivariate methods. The linear method uses a combination of principal component analysis, cluster analysis, and discriminant analysis, whereas the non-linear method uses a combination of Kohonen self-organizing maps, cluster analysis, and support vector machines. The final models were tested with recent, independent data collected from stations in the Eagle Creek watershed, within the White River basin. In 6 of 20 models the support vector machine classified the Eagle Creek stations more accurately, and in 2 of 20 models the linear discriminant analysis model achieved better results; neither model type had an apparent advantage in the remaining 12 models. This research provides insight into the variability and uncertainty in the interpretation of the various statistical estimates and statistical models when water quality monitoring data are combined with spatial data to characterize general spatial and spatio-temporal trends.
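The linear branch of such an analysis starts with principal component analysis. A minimal SVD-based sketch on synthetic "water quality" data (invented for the example: three parameters driven by one latent factor plus one independent parameter) is:

```python
import numpy as np

def pca(X, k):
    """PCA via SVD of the standardized data matrix; returns the component
    scores and the fraction of variance captured by the first k components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    return Z @ Vt[:k].T, explained[:k].sum()

# Toy monitoring matrix: 100 samples, 3 correlated "parameters" driven by one
# latent factor, plus 1 independent noisy parameter.
rng = np.random.default_rng(0)
factor = rng.normal(size=(100, 1))
X = np.hstack([factor + 0.1 * rng.normal(size=(100, 3)),
               rng.normal(size=(100, 1))])

scores, explained = pca(X, k=2)
```

Because three of the four columns share one latent driver, two components capture nearly all the variance; the resulting scores are what cluster analysis and discriminant analysis (or, in the non-linear branch, a SOM and SVM) would consume.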
