  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Counting of Mostly Static People in Indoor Conditions

Khemlani, Amit A 09 November 2004 (has links)
The ability to count people from video is a challenging problem. The scientific challenge arises from the fact that although the task is well defined, the imaging scenario is not well constrained: the background scene is uncontrolled, lighting is complex and varying, and image resolution, both spatial and temporal, is usually poor, especially in pre-stored surveillance videos. Passive counting of people from video has many practical applications, such as monitoring the number of people sitting in front of a TV set, counting people in an elevator, counting people passing through a security door, and counting people in a mall. This has led to some research in automated counting of people. Most work on people counting addresses pedestrians in outdoor settings or moving subjects in indoor settings. Little work has been done on counting people who are not moving around, and very little on people counting that can handle harsh variations in illumination. In this thesis, we explore a design that handles these issues at the pixel level, using photometry-based normalization, and at the feature level, by exploiting the spatiotemporal coherence present in the change seen in the video. We have worked on home and laboratory datasets: the home dataset has subjects watching television and the laboratory dataset has subjects working. The design of the people counter is based on video that is temporally sparsely sampled, with 15 seconds between consecutive frames. The specific computer vision methods used involve image intensity normalization, frame-to-frame differencing, motion accumulation using an autoregressive model, and grouping in the spatio-temporal volume. The experimental results show that the algorithm is not very susceptible to lighting changes: given an empty scene with only lighting change it usually produces a count of zero, and it can count in varying illumination conditions. It can count people even if they are partially visible. Counts are generated for any moving objects in the scene; the system does not yet try to distinguish between humans and non-humans. Counting errors are concentrated around frames with large motion events, such as a person moving out of the scene.
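The frame-differencing and AR-based motion-accumulation steps can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's implementation; the toy frames and the `alpha` and `thresh` values are invented for the example.

```python
import numpy as np

def accumulate_motion(frames, alpha=0.7, thresh=30):
    """Accumulate thresholded frame-to-frame change with a first-order
    autoregressive filter; `alpha` and `thresh` are illustrative values."""
    acc = np.zeros(frames[0].shape, dtype=float)
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(float) - prev.astype(float)) > thresh
        acc = alpha * acc + (1 - alpha) * diff  # AR(1) motion accumulation
    return acc

# Toy sequence: a bright 2x2 block shifts one pixel between frames
f0 = np.zeros((8, 8), dtype=np.uint8)
f1 = f0.copy(); f1[2:4, 2:4] = 255
f2 = f0.copy(); f2[2:4, 3:5] = 255
acc = accumulate_motion([f0, f1, f2])
print(acc.max() > 0)  # motion energy accumulates where the block moved
```

A per-pixel accumulator of this kind keeps responding where coherent change persists across frames, while one-frame flicker (e.g. a lighting transient) decays geometrically.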
2

Vad styr företagens investeringar? En studie om hur förändringar i reporänta, makroekonomiska faktorer samt finansiella indikatorer påverkar investeringar hos svenska företag / What determines investments of firms? A study on how changes in the repo rate, macroeconomic factors and financial indicators affect investments of Swedish firms

Jansson, Emelie, Kapple, Linda January 2015 (has links)
Background: The central bank of Sweden (Riksbanken) decided in November 2014 to set the repo rate close to zero, and in February 2015 lowered it further to -0.10 percent. Sweden thus had a negative repo rate for the first time. According to macroeconomic theory, a decrease in the repo rate should stimulate consumption and investment in the economy. Whether or not a decrease in interest rates gives firms greater incentives to invest is a topical and important field of research, and the existing research on the Swedish market in this field is thin, which gives further motive for this study.

Aim: The purpose of this study is to examine and analyse how changes in the repo rate, macroeconomic factors and financial indicators affect investments of Swedish firms.

Completion: The study takes a quantitative approach. A Vector Autoregressive (VAR) model is estimated to examine the impact of changes in the repo rate, the macroeconomic factors and the financial indicators on firms' investments. Impulse response functions are estimated in the model to allow further analysis of these effects: they make it possible to examine how an isolated unit increase in a given variable affects firms' investments over several time periods. For a more comprehensive analysis, three models are estimated: one that includes both macroeconomic factors and financial indicators, one that excludes the financial indicators, and a third that reflects the repo rate's impact on investments in two separate time periods.

Result: Firms' investments are affected by numerous factors. A unit increase in the lending rate, the exchange rate or firms' inflation expectations exhibits a significant negative relation to investments, while a unit increase in GDP growth shows a significant positive relation. The repo rate shows no direct effect on investments in the first two models. However, evidence from the third model indicates that the repo rate has a negative impact on investments during the first of the two observed periods.
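As a rough illustration of the methodology, a small VAR can be fitted by least squares and its impulse responses read off from powers of the estimated coefficient matrix. The two-variable system below is synthetic and unrelated to the study's Swedish data; the coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a 2-variable VAR(1): y_t = A y_{t-1} + e_t
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.3]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t-1] + rng.normal(scale=0.1, size=2)

# OLS estimate of A from lagged regressors: y[1:] ≈ y[:-1] @ A_hat.T
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Impulse response of variable 1 to a unit shock in variable 0
horizons = 8
irf = [np.linalg.matrix_power(A_hat, h)[1, 0] for h in range(horizons)]
print(np.round(irf, 3))
```

In the study's setting, a trace like `irf` would show how a unit shock in, say, the lending rate propagates to investments over subsequent periods and eventually dies out.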
3

Acurácia de previsões para vazão em redes: um comparativo entre ARIMA, GARCH e RNA / Forecast accuracy for network throughput: a comparison of ARIMA, GARCH and ANN

Duarte, Felipe Machado 29 August 2014 (has links)
The evolution of the Internet, driven by paradigm changes such as the Internet of Things, creates new technological demands to cope with the rising number of connected devices. One of the challenges that comes with this new reality is managing such expanding networks while assuring connectivity for every device within them. A major aspect of network management is bandwidth provisioning, which must be performed so as to avoid wasting bandwidth without compromising connectivity by restricting it too much. Balancing this equation is not a simple task, as network data traffic is very complex and exhibits properties, such as volatility, that make it difficult to model. For some time, studies have applied time-series analysis tools to forecast data throughput in computer networks, and among the most successful techniques are the ARMA, GARCH and ANN models. Although these approaches have been discussed as alternatives for modeling network traffic, little literature is available comparing their accuracy, which motivated this work to evaluate the accuracy of the ARIMA, GARCH and ANN models. The evaluation was conducted in scenarios configured with different time granularities and for multiple forecast horizons. For each scenario, ARIMA, GARCH and ANN models were fitted, and the accuracy metrics of the resulting forecasts were validated using a Rolling Forecast Horizon. The results show that the ANN yielded the best accuracy in most of the proposed scenarios, with an RMSE up to 32% lower than the forecasts generated by the ARIMA and GARCH models. However, under high volatility, GARCH provided the best forecasts, with an RMSE up to 29% lower than its counterparts. The results of this work assist network management, especially the task of bandwidth provisioning, by shedding light on the accuracy of the ARIMA, GARCH and ANN models when generating forecasts for this type of traffic.
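A minimal sketch of the rolling-forecast-origin evaluation, using a simple AR(1) forecaster against a mean benchmark. The thesis compares full ARIMA, GARCH and ANN models, which are not reproduced here; the synthetic "throughput" series and the window size are illustrative only.

```python
import numpy as np

def ar1_forecast(history):
    """One-step AR(1) forecast fitted by least squares on the window."""
    x, y = history[:-1], history[1:]
    phi = x @ y / (x @ x)
    return phi * history[-1]

def rolling_rmse(series, forecaster, window=100):
    """Rolling forecast origin: refit on each sliding window and score
    the one-step-ahead prediction (a one-step simplification of the
    Rolling Forecast Horizon validation)."""
    errs = [series[t] - forecaster(series[t - window:t])
            for t in range(window, len(series))]
    return float(np.sqrt(np.mean(np.square(errs))))

# Synthetic AR(1) series standing in for a throughput trace
rng = np.random.default_rng(7)
x = np.zeros(600)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t-1] + rng.normal()

rmse_ar = rolling_rmse(x, ar1_forecast)
rmse_naive = rolling_rmse(x, lambda h: np.mean(h))  # mean benchmark
print(rmse_ar < rmse_naive)
```

The same harness extends directly to several horizons and granularities by resampling the series and iterating the forecaster, which is how the per-scenario RMSE comparisons above would be produced.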
4

Channel Estimation and Power Control Algorithms in the Presence of Channel Aging

Bixing, Yan January 2023 (has links)
Power allocation algorithms that determine how much power should be allocated to pilot and data symbols play an important role in the trade-off between accurate channel estimation and high spectral efficiency for data symbols in the presence of time-varying fading channels. Dealing with this trade-off is highly non-trivial when the channel changes, or ages, rapidly in time. Specifically, channel aging invalidates the often-used assumption that the channel parameters can be regarded as constant between channel estimation instances. Previous works have addressed the pilot spacing problem for Rayleigh fading channels. In this work, a power control algorithm is developed for both Rayleigh and Rician fading channels to deal with the above trade-off. Specifically, this report investigates the uplink channel of a multi-user multiple-input multiple-output system. The fading channel is estimated by a suitable autoregressive model using the associated autocorrelation function. The signal-to-interference-plus-noise ratio and spectral efficiency are then calculated as functions of the power allocation ratio and other parameters of the communication network. The proposed power control algorithm is designed to find the upper bound of the spectral efficiency. As application examples, two uncrewed aerial vehicle networks are also modeled, in which the performance of the proposed power control algorithm is simulated to find how the parameters of the network influence the algorithm's results. Our investigation shows that the proposed power control algorithm performs well in the presence of fading communication channels and outperforms the benchmark case of equal power allocation between pilot and data symbols.
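The channel-aging model that such analyses build on can be sketched as a first-order autoregressive process whose lag-1 autocorrelation follows the Doppler rate. The value `rho = 0.95` below is illustrative, not taken from the report; in practice it would come from the channel's autocorrelation function (e.g. a Jakes model evaluated at the symbol period).

```python
import numpy as np

def aged_channel(n, rho, rng):
    """First-order AR model of a time-varying Rayleigh channel:
    h[t] = rho*h[t-1] + sqrt(1-rho^2)*w[t], with w ~ CN(0,1),
    so the stationary power stays 1 while the channel decorrelates."""
    h = np.zeros(n, dtype=complex)
    h[0] = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    for t in range(1, n):
        w = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
        h[t] = rho * h[t-1] + np.sqrt(1 - rho**2) * w
    return h

rng = np.random.default_rng(1)
h = aged_channel(20000, rho=0.95, rng=rng)
# Empirical lag-1 autocorrelation should be close to rho
r1 = np.mean(h[1:] * np.conj(h[:-1])).real / np.mean(np.abs(h)**2)
print(round(r1, 2))
```

With a model like this, pilots estimate `h` at one instant and the estimate decays in usefulness as `rho**k` over the following `k` data symbols, which is exactly the tension the pilot/data power split must balance.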
5

Ambient Backscatter Communication Systems: Design, Signal Detection and Bit Error Rate Analysis

Devineni, Jaya Kartheek 21 September 2021 (has links)
The success of the Internet-of-Things (IoT) paradigm relies on, among other things, developing energy-efficient communication techniques that can enable information exchange among billions of battery-operated IoT devices. With its technological capability of simultaneous information and energy transfer, ambient backscatter is quickly emerging as an appealing solution for this communication paradigm, especially for links with low data rate requirements. However, many challenges and limitations of ambient backscatter have to be overcome for widespread adoption of the technology in future wireless networks. Motivated by this, we study the design and implementation of ambient backscatter systems, including non-coherent detection and encoding schemes, and investigate techniques such as multiple-antenna interference cancellation and frequency-shift backscatter to improve the bit error rate performance of the designed systems. First, the problem of coherent and semi-coherent ambient backscatter is investigated by evaluating the exact bit error rate (BER) of the system. The test statistic used for signal detection is based on averaging the energy of the received signal samples. It is important to highlight that the conditional distributions of this test statistic are derived using the central limit theorem (CLT) approximation in the literature. The characterization of the exact conditional distributions of the test statistic as non-central chi-squared random variables for the binary hypothesis testing problem is first handled in our study, which is a key contribution of this work. The evaluation of the maximum likelihood (ML) detection threshold is also explored and found to be intractable; to overcome this, alternate strategies to approximate the ML threshold are proposed. In addition, several insights for system design and implementation are provided from both analytical and numerical standpoints.
Second, the highly appealing non-coherent signal detection is explored in the context of ambient backscatter for a time-selective channel. Modeling the time-selective fading as a first-order autoregressive (AR) process, we implement a new detection architecture at the receiver based on direct averaging of the received signal samples, which departs significantly from the energy-averaging-based receivers considered in the literature. For the proposed setup, we characterize the exact asymptotic BER for both single-antenna (SA) and multi-antenna (MA) receivers, and demonstrate the robustness of the new architecture to timing errors. Our results demonstrate that the direct-link (DL) interference from the ambient power source leads to a BER floor in the SA receiver, which the MA receiver can avoid by estimating the angle of arrival (AoA) of the DL. The analysis further quantifies the effect of improved angular resolution on the BER as a function of the number of receive antennas. Third, the advantages of utilizing Manchester encoding for data transmission in the context of non-coherent ambient backscatter are explored. Specifically, encoding is shown to simplify the detection procedure at the receiver, since the optimal decision rule is found to be independent of the system parameters. Through extensive numerical results, it is further shown that a backscatter system with Manchester encoding can achieve a signal-to-noise ratio (SNR) gain over the commonly used uncoded direct on-off keying (OOK) modulation when used in conjunction with a multi-antenna receiver employing direct-link cancellation. Fourth, the BER performance of frequency-shift ambient backscatter, which achieves self-interference mitigation by spectrally separating the reflected backscatter signal from the impinging source signal, is investigated.
The performance of the system is evaluated for a non-coherent receiver under slow fading in two different network setups: 1) a single interfering link coming from the ambient transmission occurring in the shifted frequency region, and 2) a large-scale network with multiple interfering signals coming from the backscatter nodes and ambient source devices transmitting in the band of interest. Modeling the interfering devices as a two-dimensional Poisson point process (PPP), tools from stochastic geometry are utilized to evaluate the bit error rate for the large-scale network setup. / Doctor of Philosophy / The emerging Internet-of-Things (IoT) paradigm has the capability of radically transforming the human experience. At the heart of this technology are the smart edge devices that will monitor everyday physical processes, communicate regularly with the other nodes in the network chain, and automatically take appropriate actions when necessary. Naturally, many challenges need to be tackled in order to realize the true potential of this technology. Most relevant to this dissertation are the problems of powering potentially billions of such devices and enabling low-power communication among them. Ambient backscatter has emerged as a useful technology for handling these challenges of IoT networks due to its capability to support the simultaneous transfer of information and energy. This technology allows devices to harvest energy from the ambient signals in the environment, thereby making them self-sustainable, and in addition provides carrier signals for information exchange. Using these attributes of ambient backscatter, devices can operate at very low power, which is an important feature when considering the reliability requirements of IoT networks. That said, the ambient backscatter technology needs to overcome many challenges before its widespread adoption in IoT networks.
For example, the range of backscatter is limited in comparison to conventional communication systems due to self-interference from the power source at the receiver. In addition, the probability of detecting the data in error at the receiver, characterized by the bit error rate (BER) metric, is generally high in ambient backscatter in the presence of wireless multipath, due to the double path loss and fading experienced by the backscatter link. Inspired by this, the aim of this dissertation is to develop new architecture designs for the transmitter and receiver devices that can improve the BER performance. The key contributions of the dissertation include analytical derivations of the BER, which provide insights on system design and on the main parameters impacting system performance. The design of the optimal detection technique for a communication system depends on the channel behavior, mainly its time-varying nature in the case of a flat fading channel. Depending on the mobility of devices and scatterers present in the wireless channel, it can be described as either time-selective or time-nonselective. In time-nonselective channels, coherent detection, which requires channel state information (CSI) estimation using pilot signals, can be implemented for ambient backscatter. On the other hand, non-coherent detection is preferred when the channel is time-selective, since CSI estimation is not feasible in such scenarios. In the first part of this dissertation, we analyze the performance of ambient backscatter in a point-to-point single-link system for both time-nonselective and time-selective channels. In particular, we determine the BER performance of coherent and non-coherent detection techniques for ambient backscatter systems in this line of work. In addition, we investigate the possibility of improving the BER performance using multi-antenna and coding techniques.
Our analyses demonstrate that the use of multi-antenna and coding can result in tremendous improvement of the performance and simplification of the detection procedure, respectively. In the second part of the dissertation, we study the performance of ambient backscatter in a large-scale network and compare it to that of the point-to-point single-link system. By leveraging tools from stochastic geometry, we analytically characterize the BER performance of ambient backscatter in a field of interfering devices modeled as a Poisson point process.
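The energy-averaging test statistic from the first contribution can be illustrated with a small Monte Carlo: under each bit hypothesis the receiver averages the energy of N samples and compares it to a threshold. The powers below are stand-ins, and the midpoint threshold is a simple substitute for the ML threshold the dissertation analyzes.

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 64, 5000
# Bit 1: the tag reflects, adding power on top of the ambient noise floor;
# bit 0: it absorbs. Both power levels are illustrative.
p_noise, p_sig = 1.0, 0.5

def energy_stat(extra_power):
    # I/Q samples -> per-sample energy -> average over the N-sample window
    x = rng.normal(scale=np.sqrt((p_noise + extra_power) / 2),
                   size=(trials, N, 2))
    return np.mean(np.sum(x**2, axis=2), axis=1)

t0 = energy_stat(0.0)     # bit 0: noise only
t1 = energy_stat(p_sig)   # bit 1: noise + backscattered power
thr = (t0.mean() + t1.mean()) / 2   # midpoint threshold, not the ML one
ber = 0.5 * (np.mean(t0 > thr) + np.mean(t1 < thr))
print(ber < 0.1)
```

The sum of `2*N` squared Gaussians that underlies each statistic is exactly the (non-central) chi-squared structure the exact-BER analysis above exploits, where the literature's CLT argument instead approximates `t0` and `t1` as Gaussian.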
6

Bezkontaktní detekce fyziologických parametrů z obrazových sekvencí / Non-contact detection of physiological parameters from image sequences

Bršlicová, Tereza January 2015 (has links)
This thesis deals with contactless, non-invasive methods for estimating heart and respiratory rate. Non-contact measurement involves recording subjects with a camera; the values of the physiological parameters are then estimated from the captured image sequences using suitable approaches. The theoretical part is devoted to a description of the various methods and their implementation. The practical part describes the design and realization of an experiment for contactless detection of heart and respiratory rate. The experiment was carried out on 10 volunteers whose heart and respiratory rates were simultaneously recorded with the BIOPAC reference system. Processing and analysis of the measured data were carried out in Matlab. Finally, the results of the contactless detection were compared with the BIOPAC reference measurements, statistically evaluated and discussed.
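A common building block of such contactless estimation is picking the spectral peak of the mean region-of-interest brightness inside the plausible heart-rate band. The sketch below uses a synthetic pulse signal; the frame rate, pulse frequency and amplitudes are assumptions, not values from the thesis.

```python
import numpy as np

fs = 30.0                      # assumed camera frame rate (frames/s)
t = np.arange(0, 30, 1 / fs)   # 30 s recording
hr_hz = 1.2                    # ground-truth pulse: 72 beats per minute
rng = np.random.default_rng(3)
# Synthetic mean-ROI brightness: weak cardiac pulse plus sensor noise
signal = 0.05 * np.sin(2 * np.pi * hr_hz * t) \
       + rng.normal(scale=0.02, size=t.size)

# Spectral peak restricted to the physiological band (0.7-4 Hz)
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 4.0)
est_bpm = 60 * freqs[band][np.argmax(spec[band])]
print(round(est_bpm))  # → 72
```

Respiratory rate can be estimated the same way by searching a lower band (roughly 0.1-0.5 Hz), which is one reason band-limiting the search is essential.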
7

Statistical Designs for Network A/B Testing

Pokhilko, Victoria V 01 January 2019 (has links)
A/B testing refers to the statistical procedure of experimental design and analysis to compare two treatments, A and B, applied to different testing subjects. It is widely used by technology companies such as Facebook, LinkedIn, and Netflix to compare different algorithms, web designs, and other online products and services. The subjects participating in these online A/B testing experiments are users who are connected in social networks of different scales. Two connected subjects are similar in their social behaviors, education and financial background, and other demographic aspects, so it is natural to assume that their reactions to online products and services are related to their network adjacency. In this research, we propose to use the conditional autoregressive (CAR) model to represent the network structure and include the network effects in the estimation and inference of the treatment effect. The following statistical designs are presented: a D-optimal design for network A/B testing, a re-randomization experimental design approach for network A/B testing, and a covariate-assisted Bayesian sequential design for network A/B testing. The effectiveness of the proposed methods is shown through numerical results with synthetic networks and real social networks.
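A network-correlated response can be emulated by giving the errors a precision matrix built from the graph, after which the treatment effect is estimated by generalized least squares. This is a simplified stand-in: the precision `I + rho*(D - W)` below is one proper CAR-like choice, not necessarily the dissertation's parameterization, and the network, `rho` and `tau` are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau, rho = 50, 2.0, 0.15      # users, true treatment effect, dependence
# Random symmetric adjacency standing in for the social network
W = (rng.random((n, n)) < 0.1).astype(float)
W = np.triu(W, 1); W = W + W.T
deg = W.sum(axis=1)
# CAR-style precision: errors of connected users are positively correlated
prec = np.diag(1 + rho * deg) - rho * W      # = I + rho*(D - W), PD
cov = np.linalg.inv(prec)

z = rng.integers(0, 2, n)        # A/B assignment per user
y = tau * z + rng.multivariate_normal(np.zeros(n), cov)

# GLS estimate of the treatment effect using the known precision matrix
X = np.column_stack([np.ones(n), z])
beta = np.linalg.solve(X.T @ prec @ X, X.T @ prec @ y)
print(round(float(beta[1]), 1))
```

Ignoring `prec` (ordinary least squares) would still be unbiased here but less efficient; the design questions above concern choosing the assignment `z` so that estimates of this kind are as precise as possible given the network.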
8

Differential modulation and non-coherent detection in wireless relay networks

2014 January 1900 (has links)
The technique of cooperative communications is finding its way into the next generations of many wireless communication applications. Due to the distributed nature of cooperative networks, acquiring fading channel information for coherent detection is more challenging than in traditional point-to-point communications. To bypass the requirement of channel information, differential modulation together with non-coherent detection can be deployed. This thesis is concerned with various issues related to differential modulation and non-coherent detection in cooperative networks. Specifically, the thesis examines the behavior and robustness of non-coherent detection in mobile environments (i.e., time-varying channels). The amount of channel variation is related to the normalized Doppler shift, which is a function of the user's mobility. The Doppler shift is used to distinguish between slow time-varying (slow-fading) and rapid time-varying (fast-fading) channels. The performance of several important relay topologies, including single-branch and multi-branch dual-hop relaying with and without a direct link, employing amplify-and-forward relaying and two-symbol non-coherent detection, is analyzed. For this purpose, a time-series model is developed for characterizing the time-varying nature of the cascaded channel encountered in amplify-and-forward relaying. Also, for single-branch and multi-branch dual-hop relaying without a direct link, multiple-symbol differential detection is developed. First, for single-branch dual-hop relaying without a direct link, the performance of two-symbol differential detection in time-varying Rayleigh fading channels is evaluated. It is seen that the performance degrades in rapid time-varying channels. Then, multiple-symbol differential detection is developed and analyzed to improve the system performance in fast-fading channels. Next, multi-branch dual-hop relaying with a direct link is considered.
The performance of this relay topology using a linear combining method and two-symbol differential detection is examined in time-varying Rayleigh fading channels. New combining weights are proposed and shown to improve the system performance in fast-fading channels. The performance of the simpler selection combining at the destination is also investigated in general time-varying channels. It is illustrated that selection combining performs very close to linear combining. Finally, differential distributed space-time coding is studied for a multi-branch dual-hop relaying network without a direct link. The diversity performance of this network with two-symbol differential detection over time-varying channels is evaluated. It is seen that the achieved diversity is severely affected by channel variation. Moreover, a multiple-symbol differential detection scheme is designed to improve the performance of differential distributed space-time coding in fast-fading channels.
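Two-symbol non-coherent detection of differentially encoded BPSK over an AR(1) time-varying Rayleigh channel can be sketched as below. The Doppler-related parameter `rho` and the SNR are illustrative, and the relay topology is reduced to a single fading link for brevity; lowering `rho` toward fast fading raises the error floor, which is the degradation the thesis analyzes.

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho, snr_db = 20000, 0.999, 15
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, n)
d = 1 - 2 * bits                             # DBPSK symbols: +1 / -1
s = np.concatenate([[1.0], np.cumprod(d)])   # differential encoding

# Time-varying Rayleigh channel as an AR(1) process
h = np.zeros(n + 1, dtype=complex)
h[0] = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
for t in range(1, n + 1):
    w = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    h[t] = rho * h[t-1] + np.sqrt(1 - rho**2) * w

noise = (rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)) \
        / np.sqrt(2 * snr)
y = h * s + noise

# Two-symbol non-coherent detection: no channel estimate needed,
# just the sign of Re(y_t * conj(y_{t-1}))
dec = (np.real(y[1:] * np.conj(y[:-1])) < 0).astype(int)
ber = np.mean(dec != bits)
print(ber < 0.05)
```

Because each decision uses only two adjacent received samples, the detector implicitly assumes the channel barely moves between them; multiple-symbol differential detection relaxes exactly this assumption by jointly deciding over longer observation windows.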
9

Classificação automática de cardiopatias baseada em eletrocardiograma

Bueno, Nina Maria 30 October 2006 (has links)
This work is dedicated to the recognition and classification of cardiac diseases diagnosed through the electrocardiogram (ECG). This examination is commonly used in cardiology visits, emergency rooms, intensive care units and elective exams, as a diagnostic aid for heart diseases such as acute myocardial infarction, bundle branch blocks, hypertrophy and others. The software developed to support the work focuses on extracting characteristics of the ECG signal, represented by cycles, and feeding them to an artificial neural network for recognition of the diseases. For feature extraction, we used an autoregressive (AR) model, in which the recent history of the curve determines the next point, fitted with the least mean squares (LMS) algorithm to minimize the mean-square prediction error. A neural network with multilayer perceptron architecture and backpropagation training was chosen for pattern recognition, for its generalization capability. The method proved adequate and efficient for the proposed objective. / Mestre em Ciências
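The AR-plus-LMS feature extraction step can be sketched as follows. The synthetic AR(2) signal stands in for an ECG cycle's dynamics, and the model order, step size and coefficients are illustrative choices, not the thesis's values; the converged weight vector `w` is the kind of feature vector that would be fed to the MLP.

```python
import numpy as np

def lms_ar_coeffs(x, order=2, mu=0.02):
    """LMS adaptation of AR coefficients: predict x[t] from the previous
    `order` samples and nudge the weights along the instantaneous
    gradient of the squared prediction error."""
    w = np.zeros(order)
    for t in range(order, len(x)):
        past = x[t - order:t][::-1]   # most recent sample first
        err = x[t] - w @ past
        w += mu * err * past
    return w

# Synthetic AR(2) signal standing in for an ECG cycle
rng = np.random.default_rng(6)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 1.2 * x[t-1] - 0.5 * x[t-2] + rng.normal(scale=0.5)

w = lms_ar_coeffs(x)
print(np.round(w, 1))  # should approach the true coefficients [1.2, -0.5]
```

Because the AR coefficients summarize the shape of each cycle in a handful of numbers, classifying them with an MLP is far cheaper than feeding the network raw ECG samples.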
