181

以White的真實性檢定與Stepwise Multiple Testing來檢驗技術分析在不同股票市場的獲利性 / Examining the profitability of technical analysis with White's reality check and stepwise multiple testing in different stock markets

俞海慶, Yu, Hai Cing Unknown Date
After applying White's reality check and the stepwise multiple testing procedure to eliminate data snooping, some technical trading rules can indeed beat the broad market in five indices, the DJIA, NASDAQ, S&P 500, NIKKEI 225, and TAIEX, from 1989 to 2008. In less mature markets, however, and in earlier sample periods, I could not find a strong relation between these markets and excess returns. Learning strategies generally do not perform better than simple ones, which suggests that applying the rule with the best past record to forecast the future is not a good idea. I also found that beating the buy-and-hold strategy is more likely in a bear market than in a steady bull market.
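The correction for data snooping described above is bootstrap-based. As a rough illustration of the idea only, the sketch below computes a reality-check-style p-value for a set of trading rules measured against a benchmark; it uses a plain i.i.d. bootstrap for brevity (the original test uses the stationary bootstrap), and the variable names are illustrative, not taken from the thesis.

```python
import numpy as np

def reality_check_pvalue(rule_returns, benchmark_returns, n_boot=1000, seed=0):
    """Bootstrap p-value in the spirit of White's (2000) reality check.

    rule_returns      : (n_days, n_rules) daily returns of the candidate trading rules
    benchmark_returns : (n_days,) daily returns of the benchmark (e.g. buy-and-hold)
    Uses a plain i.i.d. bootstrap for brevity; the original test uses the
    stationary bootstrap to respect serial dependence in returns.
    """
    f = np.asarray(rule_returns) - np.asarray(benchmark_returns)[:, None]  # relative performance
    n = f.shape[0]
    fbar = f.mean(axis=0)
    v_obs = np.sqrt(n) * fbar.max()                  # observed max-performance statistic
    rng = np.random.default_rng(seed)
    v_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample trading days with replacement
        v_boot[b] = np.sqrt(n) * (f[idx].mean(axis=0) - fbar).max()
    return float((v_boot >= v_obs).mean())           # fraction of bootstrap maxima >= observed
```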
182

Sequent calculi with an efficient loop-check for BDI logics / Sekvenciniai skaičiavimai BDI logikoms su efektyvia ciklų paieška

Birštunas, Adomas 02 March 2010
Sequent calculi for BDI logics are the research object of this thesis. BDI logics are widely used for the description and implementation of agent systems. Agents are autonomous systems that act in some environment and strive to achieve preassigned goals. Implementing decision making is the main and most complicated part of implementing agent systems, and logical calculi may be used for this purpose. This thesis studies sequent calculi for BDI logics. Like sequent calculi for other modal logics, they use a loop-check technique to obtain decidability, and an inefficient loop-check consumes a major part of the resources used for derivation. For some modal logics, loop-check-free sequent calculi or calculi with an efficient loop-check are known. This thesis presents a loop-check-free sequent calculus for KD45 logic, the main fragment of BDI logics. The introduced calculus not only eliminates loop-check but also simplifies sequent derivation. For branching-time logic (another fragment of BDI logics), a sequent calculus with an efficient loop-check is presented. The obtained results are adapted to create sequent calculi for mono-agent and multi-agent BDI logics. The introduced calculi use only a restricted loop-check, and for some types of loops the loop-check is eliminated entirely. These results enable the creation of more efficient agent systems based on BDI logics.
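As a schematic illustration of the loop-check bookkeeping that the thesis aims to avoid, the sketch below performs backward proof search with a naive history-based loop-check. The `expand` function standing in for rule applications is a placeholder supplied by the caller; it is not the KD45 or branching-time calculus developed in the thesis.

```python
def backward_search(sequent, expand, history=frozenset()):
    """Schematic backward proof search with a naive loop-check (illustrative only).

    sequent : a hashable representation of the current sequent
    expand  : function mapping a sequent to 'axiom' if it is an axiom, or to a
              list of premise lists (one list per applicable rule).
    The naive loop-check stores every sequent on the current branch and gives up
    when a sequent repeats; the thesis's point is that this bookkeeping dominates
    derivation cost and can often be restricted or eliminated.
    """
    if sequent in history:            # loop detected: this branch cannot make progress
        return False
    outcome = expand(sequent)
    if outcome == 'axiom':
        return True
    new_history = history | {sequent}
    # the sequent is derivable if some rule application has all premises derivable
    return any(all(backward_search(p, expand, new_history) for p in premises)
               for premises in outcome)
```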
184

FPGA-Based LDPC Coded Modulations for Optical Transport Networks

Zou, Ding January 2017
Current coherent optical transmission systems focus on single-carrier solutions for 400 Gb/s serial transmission to support traffic growth in fiber-optic communications, together with a few subcarrier-multiplexed solutions for serial data rates of 1 Tb/s and beyond. With the advancement of analog-to-digital converter technologies, high-order modulation formats up to 64-QAM with symbol rates up to 72 Gbaud have been demonstrated experimentally with Raman amplification. To enable such high serial data rates, it is highly desirable to implement in hardware low-complexity digital signal processing schemes and advanced forward error correction with powerful error-correction capability. In this dissertation, to enable high-speed optical communications, we first propose an efficient FPGA architecture for high-performance binary and non-binary LDPC decoding engines that support throughputs of multiple Gb/s with low power consumption, providing high net coding gains at a target bit-error rate of 10^-15. Further, we implement a rate-adaptive binary LDPC coding scheme based on generalized LDPC coding and a rate-adaptive non-binary LDPC coding scheme based on puncturing, in which a large number of parameters can be reconfigured to cope with time-varying optical channel conditions and service requirements. Based on a comprehensive analysis of complexity, latency, and power consumption, we demonstrate that the proposed implementation represents a feasible solution for next-generation optical communication networks. Additionally, we investigate the FPGA implementation of rate-adaptive regular LDPC coding combined with up to six high-order modulation formats, demonstrate high net-coding-gain performance, and present a bit-loading algorithm for irregular LDPC coding. Lastly, we present the real-time implementation of a direct-detection OFDM transceiver with multi-gigasymbol-per-second symbol rates in a back-to-back configuration.
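The decoders in the dissertation are soft-decision binary and non-binary LDPC engines implemented on FPGAs. As a much simpler software illustration of how a parity-check matrix drives decoding, the sketch below runs a toy hard-decision bit-flipping decoder; it is not the architecture implemented in the work, and the example code is a small (7,4) Hamming code rather than an optical-grade LDPC code.

```python
import numpy as np

def bit_flip_decode(H, received, max_iter=50):
    """Toy hard-decision bit-flipping decoder (illustrative only).

    H        : (m, n) parity-check matrix with 0/1 entries
    received : length-n vector of hard-decision bits
    """
    H = np.asarray(H)
    x = np.array(received, dtype=int)
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():                 # all parity checks satisfied
            return x
        failed = H[syndrome == 1].sum(axis=0)  # failed checks each bit takes part in
        x[np.argmax(failed)] ^= 1              # flip the most suspicious bit
    return x                                   # give up after max_iter iterations

# Toy example: (7,4) Hamming code with the first bit flipped.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(bit_flip_decode(H, [0, 0, 1, 1, 0, 1, 0]))   # recovers [1 0 1 1 0 1 0]
```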
185

Presidentské systémy / Presidential systems

Niklová, Dominika January 2013
The topic of my study is the presidential system. I chose this topic because I am interested in the situation of Latin American countries, which decided to follow the constitutional arrangement of the United States; many authors claim that presidentialism is a cause of instability in these countries. The thesis is composed of ten chapters, which are divided into subsections and other parts. At the beginning I introduce the political systems of our society and choose one of them, the presidential system, for detailed analysis. Chapter Two covers the history of the creation of the Constitution of the United States. This part of history is important because this Constitution is the model for countries that have opted for presidentialism. At this stage I explain the term presidential system and its characteristic features; without this knowledge we cannot understand how the system functions or compare it with other political systems. Chapter Three describes presidentialism and its different forms. In my work I describe presidentialism in Latin America and in the United States. In Europe there are many countries that opted for presidentialism after 1991, but relics of communism remain and elements of democracy are absent; in some of these countries strong presidents govern, and we can characterize them as...
186

輔助社群媒體打卡研究之分析工具研發 / Development of an analysis tool to facilitate check-in research in social media

梁芷瑄, Liang, Chih Hsuan Unknown Date
Check-in is one of the functions frequently used on social media systems such as Facebook. In the past, most research on check-ins used qualitative methods, and before interviewing users, qualitative researchers often had to inspect a user's Facebook timeline manually to collect and organize data, which is time-consuming. In addition, users' check-in practices on Facebook are diverse, and many check-ins are not tied to the actual time and place. To understand a user's motivation for checking in, we also need data such as the actions and location recorded on the user's mobile phone in order to reconstruct the context of each check-in. To assess researchers' needs, we used an interdisciplinary approach in which qualitative research and tool development ran in parallel, and developed a visual analysis tool to help qualitative researchers analyze the data. Our tool collects check-in data from Facebook and synchronizes it with the log on the user's mobile phone. By visualizing the data in various forms, such as a bubble chart, timeline, map, and table, we provide researchers with both a quick overview of user behavior and in-depth exploration of specific events. In our experiment, five subjects played the role of check-in researchers, learned the system through a tutorial, and then freely explored a user's data set while recording their exploration process and findings; the system was evaluated on a 5-point scale. The average score for usefulness was 4.6, and the subjects considered the tool very helpful for analyzing check-in behavior and for follow-up qualitative study. The average score for ease of use was 4.1; although some subjects needed time to learn the system, most agreed that it was easy to use, showing that the system is both useful and usable. In addition, some subjects found creative uses of the system that the designer had not anticipated, which reflects the nature of our tool as an instrument for data exploration.
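One core step described above is synchronizing Facebook check-ins with the phone log to reconstruct the context of each check-in. A minimal sketch of that kind of alignment, using pandas and hypothetical column names rather than the tool's actual data model, could look like this:

```python
import pandas as pd

# Hypothetical records; the tool's real data model is not public.
checkins = pd.DataFrame({
    "timestamp": pd.to_datetime(["2017-03-01 12:03", "2017-03-01 19:40"]),
    "place":     ["Cafe X", "Gym Y"],
})
phone_log = pd.DataFrame({
    "timestamp": pd.to_datetime(["2017-03-01 11:58", "2017-03-01 19:55"]),
    "gps":       [(25.03, 121.56), (25.04, 121.53)],
    "app":       ["Facebook", "Facebook"],
})

# Attach the nearest phone-log record (within 30 minutes) to each check-in,
# so a researcher can compare the declared place with the recorded context.
merged = pd.merge_asof(
    checkins.sort_values("timestamp"),
    phone_log.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("30min"),
)
print(merged)
```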
187

Uma abordagem de dígitos verificadores e códigos corretores no ensino fundamental / An approach to check digits and error-correcting codes in middle school

Machado, Daniel Alves 19 May 2016
This work, developed through bibliographic research, presents an overview of the check digits found in the Brazilian individual taxpayer registry (CPF), in bar codes, and in the ISBN system; it gives an introduction to the Hamming metric and error-correcting codes; it discusses linear codes, the most widely used class of codes; and it proposes a pedagogical approach for mathematics teachers to apply in middle school, which can also be adapted to high school. Appendix A proposes some exercises that can be worked through with students in the classroom.
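As a concrete instance of the check digits the work surveys, the sketch below computes the two CPF check digits and the ISBN-10 check digit using the standard mod-11 rules; the EAN-13 bar-code digit, which uses a mod-10 rule, is omitted for brevity.

```python
def cpf_check_digits(first9):
    """Two CPF check digits from the first nine digits (standard mod-11 rule)."""
    digits = [int(d) for d in first9]
    for weight_start in (10, 11):       # 1st digit: weights 10..2; 2nd digit: weights 11..2
        s = sum(d * w for d, w in zip(digits, range(weight_start, 1, -1)))
        r = s % 11
        digits.append(0 if r < 2 else 11 - r)
    return "".join(str(d) for d in digits[9:])

def isbn10_check_digit(first9):
    """ISBN-10 check digit (weights 10..2, result mod 11, 10 printed as 'X')."""
    s = sum(int(d) * w for d, w in zip(first9, range(10, 1, -1)))
    check = (11 - s % 11) % 11
    return "X" if check == 10 else str(check)

print(isbn10_check_digit("030640615"))   # "2"  -> ISBN 0-306-40615-2
print(cpf_check_digits("111444777"))     # "35" -> CPF 111.444.777-35
```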
188

Conception du décodeur NB-LDPC à débit ultra-élevé / Design of ultra high throughput rate NB-LDPC decoder

Harb, Hassan 08 November 2018
Non-binary low-density parity-check (NB-LDPC) codes are an interesting family of error-correcting codes and are well known to outperform their binary counterparts; however, their non-binary nature makes the decoding process considerably more complex. This thesis proposes new decoding algorithms for NB-LDPC codes that shape the resulting hardware architectures, which are expected to have low complexity and a high throughput rate. The first contribution is to reduce the complexity of the check node (CN) by minimizing the number of messages it processes. This is achieved through a pre-sorting process that orders the incoming messages by their reliability values; the least likely messages are omitted, and the hardware dedicated to them is simply removed. This reliability-based sorting, which allows only the most reliable messages to be processed, yields a large reduction in the hardware complexity of the NB-LDPC decoder and must come with no significant degradation of decoding performance. A new hybrid CN architectural model (H-CN), combining two state-of-the-art algorithms, the forward-backward CN (FB-CN) and the syndrome-based CN (SB-CN), is proposed to fully exploit the advantages of pre-sorting. The thesis also proposes new methods for variable node (VN) processing in the context of a pre-sorting-based architecture. Different implementation examples are given for NB-LDPC codes defined over GF(64) and GF(256). In particular, a very efficient parallel decoder architecture is presented for a rate-5/6 code over GF(64); it is characterized by a fully parallel check-node architecture that receives all input messages in a single clock cycle. The proposed methodology for the parallel implementation of NB-LDPC decoders opens a new avenue in the hardware design of ultra-high-throughput decoders. Finally, since NB-LDPC decoders require a sorting function that extracts the P minimum values from a list of size Ns, a chapter is dedicated to this problem, and an original architecture called First-Then-Second-Extrema-Selection (FTSES) is proposed for its efficient implementation.
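The recurring sub-task mentioned at the end, extracting the P smallest values from a list of Ns reliability values, can be sketched in software as a one-pass first-then-second minimum search for P = 2. This is only an algorithmic analogue of the task, not the FTSES hardware architecture itself.

```python
def two_smallest(values):
    """One-pass search for the smallest and second-smallest values in a list."""
    first = second = float("inf")
    for v in values:
        if v < first:
            first, second = v, first
        elif v < second:
            second = v
    return first, second

# Example: LLR-like reliability values of incoming check-node messages.
print(two_smallest([0.9, 0.2, 0.7, 0.1, 0.5]))   # (0.1, 0.2)
```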
189

Méthodologie d’élaboration d’un bilan de santé de machines de production pour aider à la prise de décision en exploitation : application à un centre d’usinage à partir de la surveillance des composants de sa cinématique / Machine health check methodology to help maintenance in operational condition : application to machine tool from its kinematic monitoring

Laloix, Thomas 11 December 2018
This PhD work was initiated by Renault, in collaboration with the Nancy Research Centre in Automatic Control (CRAN), with the aim of laying the foundations of a generic methodology for assessing the health state of production machines. The methodology stems from a joint machine-product analysis linked to industrial requirements; it is based on a PHM (Prognostics and Health Management) approach and is structured in five sequential steps, the first two of which are developed in this thesis and constitute its major scientific contributions. The first contribution is the formalization of the knowledge arising from the machine-product relationship, based on the extension of existing functioning/dysfunctioning analysis methods such as FMECA and HAZOP. The formalization of the knowledge concepts and their interactions is materialized by a meta-model expressed in UML (Unified Modelling Language). This contribution leads to the identification of relevant parameters to be monitored, from the component level up to the machine level; these parameters then serve as input to the elaboration of the machine health check, which is the second major contribution of the thesis. The health indicators are built with aggregation methods such as the Choquet integral, which raises the problem of capacity identification; a global optimization model for identifying the system's multi-level capacities using genetic algorithms is therefore proposed. Both contributions are designed to be generic rather than tied to a specific class of equipment, in line with industrial needs. The feasibility and interest of the approach are demonstrated on the case of a machine tool located at the RENAULT Cléon factory.
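Since the health indicators are aggregated with a Choquet integral, a minimal sketch of that aggregation step may help. The indicator names and the hand-written capacity below are purely illustrative; in the thesis the capacities are identified with genetic algorithms.

```python
def choquet_integral(scores, capacity):
    """Discrete Choquet integral of per-indicator scores w.r.t. a capacity (fuzzy measure).

    scores   : dict indicator -> value in [0, 1]
    capacity : dict frozenset(indicators) -> weight in [0, 1], monotone,
               with capacity[frozenset()] == 0 and capacity of the full set == 1.
    """
    order = sorted(scores, key=scores.get)          # indicators by increasing score
    remaining = set(scores)
    total, previous = 0.0, 0.0
    for ind in order:
        total += (scores[ind] - previous) * capacity[frozenset(remaining)]
        previous = scores[ind]
        remaining.remove(ind)
    return total

# Toy machine health check: two component-level indicators aggregated into one score.
scores = {"spindle": 0.8, "axis": 0.5}
capacity = {frozenset(): 0.0,
            frozenset({"spindle"}): 0.4,
            frozenset({"axis"}): 0.3,
            frozenset({"spindle", "axis"}): 1.0}
print(choquet_integral(scores, capacity))   # 0.5*1.0 + (0.8-0.5)*0.4 = 0.62
```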
190

Simulação do processamento de passageiros: check point do terminal aeroportuário de Goiânia / Simulation of passenger processing: check point of the airport terminal of Goiânia

Figueiredo, Luiz Antonio 09 March 2017
Due to the growth in the number of air travelers around the world, it is necessary to implement efficient control of passenger flow in the internal processes of airport terminals. Like every public place, airport terminals have always been vulnerable to all kinds of crime, and terminal security concerns encompass not only the fight against acts of terrorism but also problems that may adversely affect airport operations. In this context, this work presents a model, based on computational simulation, of the passenger service process at the security inspection checkpoint of the Goiânia airport terminal. A combined method was developed: a quantitative method to deal with the measurable elements of the security inspection process, together with a qualitative analysis intended to assess the complexity of the problem in greater depth. The security inspection system was modelled and simulated in the Flexsim software, yielding statistical performance results and leading to the conclusion that passenger characteristics directly influence processing times in the system. In this regard, a lack of information was observed regarding the procedures passengers should adopt with respect to their belongings and clothing while passing through the security inspection process.
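The FlexSim model itself is not reproduced here, but the basic queueing logic of a multi-lane security checkpoint can be sketched as a small simulation. The arrival and service rates below are hypothetical placeholders, not the values measured at Goiânia.

```python
import heapq
import random
import statistics

def simulate_checkpoint(n_passengers=5000, arrival_rate=1.5,
                        service_rate=0.6, n_lanes=3, seed=42):
    """Rough multi-lane FIFO checkpoint simulation (rates per minute, hypothetical)."""
    rng = random.Random(seed)
    free_at = [0.0] * n_lanes              # time at which each lane becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_passengers):
        t += rng.expovariate(arrival_rate)          # next passenger arrival
        lane_free = heapq.heappop(free_at)          # earliest-available lane
        start = max(t, lane_free)                   # wait if all lanes are busy
        waits.append(start - t)
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return statistics.mean(waits)

print(f"average wait: {simulate_checkpoint():.2f} minutes")
```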
