161

High Level Power Estimation and Reduction Techniques for Power Aware Hardware Design

Ahuja, Sumit 14 June 2010 (has links)
The unabated continuation of Moore's law has allowed the number of transistors per unit area of a silicon die to double every two years or so. At the same time, increasing demand on consumer electronics and computing equipment to run sophisticated applications has led to unprecedented complexity in hardware designs. These factors have necessitated raising the abstraction level of design entry for hardware systems beyond the Register-Transfer Level (RTL) to the Electronic System Level (ESL). However, the power envelope imposed on designs by packaging and other thermal limitations, and the energy envelope imposed by battery-lifetime considerations, have also created a need for power/energy-efficient design. The confluence of these two technological issues has created an urgent need to solve two problems: (i) how do we enable a power-aware design flow with a design entry point at the Electronic System Level? (ii) how do we enable power-aware High Level Synthesis (HLS) to automatically synthesize an RTL implementation from ESL? This dissertation distinguishes itself by addressing the following two issues: (i) since the power/energy consumption of electronic systems largely depends on implementation details, and high-level models abstract away such details, power/energy estimation at these levels has not been addressed thoroughly; (ii) much work has been done applying various techniques to control-data-flow graphs (CDFGs) to find power/area/latency Pareto points during behavioral synthesis, but high-level C-based functional models of compute-intensive components, which could easily be synthesized as co-processors, offer many opportunities to reduce power. Some of these savings opportunities are traditional, such as clock-gating and operand isolation. Exploring alternate granularities of these techniques with target applications in mind opens the door to traditional power-reduction opportunities at the high level. This work therefore concentrates on the two aforementioned areas of inadequacy in hardware design methodologies. Our proposed solutions include utilizing ESL simulation traces and mapping them to lower abstraction levels for power estimation, and deriving statistical power models using regression-based learning for power estimation at early design stages. On the HLS front, we propose techniques that insert power-saving features during the synthesis process by exploring the granularity and scope of clock-gating and sequential clock-gating. Finally, this work shows how to marry the two domains, estimation and reduction: a power model is proposed that predicts the power savings obtainable through clock-gating and guides HLS to insert clock-gating selectively. / Ph. D.
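Not from the dissertation itself: the Python sketch below only illustrates the general idea of a regression-based power model of the kind described, fitting per-window power against switching-activity features extracted from simulation traces. The feature names, dimensions, and data are synthetic.

```python
import numpy as np

# Fit a linear power model P ~ w.x + b, where x holds per-window toggle
# densities from simulation traces and y holds reference power numbers
# from a lower-level (e.g., gate-level) estimation tool.
rng = np.random.default_rng(42)
n_windows, n_features = 200, 4                       # hypothetical sizes
X = rng.uniform(0.0, 1.0, (n_windows, n_features))   # toggle densities
true_w = np.array([3.1, 1.7, 0.6, 2.4])              # synthetic ground truth
y = X @ true_w + 0.5 + rng.normal(0.0, 0.05, n_windows)  # mW, noisy

# Least-squares fit with an intercept term (static power).
A = np.hstack([X, np.ones((n_windows, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]

def predict(x):
    """Estimate power (mW) for a new trace window's feature vector."""
    return x @ w + b

print(f"static ~= {b:.2f} mW, weights ~= {np.round(w, 2)}")
```

Once trained, such a model lets the designer estimate power directly from fast ESL simulation traces instead of rerunning slow low-level tools.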
162

FPGA Reservoir Computing Networks for Dynamic Spectrum Sensing

Shears, Osaze Yahya 14 June 2022 (has links)
The rise of 5G and beyond systems has fuelled research in merging machine learning with wireless communications to achieve cognitive radios. However, the portability and limited power supply of radio frequency devices limit engineers' ability to combine them with powerful predictive models. This hinders the ability to support advanced 5G applications such as device-to-device (D2D) communication and dynamic spectrum sharing (DSS). This challenge has inspired a wave of research in energy-efficient machine learning hardware with low computational and area overhead. In particular, hardware implementations of the delayed feedback reservoir (DFR) model show promising results for meeting these constraints while achieving high accuracy in cognitive radio applications. This thesis answers two research questions surrounding the applicability of FPGA DFR systems for DSS. First, can a DFR network implemented on an FPGA run faster and with lower power than a purely software approach? Second, can the system be implemented efficiently on an edge device running at less than 10 watts? Two systems are proposed that prove FPGA DFRs can achieve these feats: a mixed-signal circuit, followed by a high-level synthesis circuit. The implementations execute up to 58 times faster and operate at more than 90% lower power than the software models. Furthermore, the lowest recorded average power of 0.130 watts proves that these approaches meet typical edge-device constraints. When validated on the NARMA10 benchmark, the systems achieve a normalized error of 0.21, compared to state-of-the-art error values of 0.15. In a DSS task, the systems are able to predict spectrum occupancy with up to 0.87 AUC in high-noise, multiple-input, multiple-output (MIMO) antenna configurations, compared to 0.99 AUC in other works. At the end of this thesis, the trade-offs between the approaches are analyzed, and future directions for advancing this study are proposed. / Master of Science / The rise of 5G and beyond systems has fuelled research in merging machine learning with wireless communications to achieve cognitive radios. However, the portability and limited power supply of radio frequency devices limit engineers' ability to combine them with powerful predictive models. This hinders the ability to support advanced 5G and internet-of-things (IoT) applications. This challenge has inspired a wave of research in energy-efficient machine learning hardware with low computational and area overhead. In particular, hardware implementations of a low-complexity neural network model, called the delayed feedback reservoir, show promising results for meeting these constraints while achieving high accuracy in cognitive radio applications. This thesis answers two research questions surrounding the applicability of field-programmable gate array (FPGA) delayed feedback reservoir systems for wireless communication applications. First, can this network implemented on an FPGA run faster and with lower power than a purely software approach? Second, can the network be implemented efficiently on an edge device running at less than 10 watts? Two systems are proposed that prove the FPGA networks can achieve these feats. The systems demonstrate lower power consumption and latency than the software models. Additionally, the systems maintain high accuracy on traditional neural network benchmarks and wireless communications tasks. The second implementation is further demonstrated in a software-defined radio architecture.
At the end of this thesis, the trade-offs between the approaches are analyzed, and future directions for advancing this study are proposed.
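As a rough illustration of the DFR model itself, not of the thesis's mixed-signal or HLS hardware, the NumPy sketch below time-multiplexes a single nonlinear node into virtual nodes via a random input mask and trains a ridge-regression readout. All parameter names and values are illustrative assumptions.

```python
import numpy as np

def dfr_states(u, n_virtual=50, gamma=0.5, eta=0.4, seed=0):
    """Discrete-time delayed feedback reservoir: one nonlinear node whose
    delay line is split into n_virtual 'virtual nodes' by a random mask."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, n_virtual)   # time-multiplexing mask
    states = np.zeros((len(u), n_virtual))
    prev = np.zeros(n_virtual)                 # states one delay ago
    for k, u_k in enumerate(u):
        # each virtual node mixes its masked input with delayed feedback
        prev = np.tanh(gamma * mask * u_k + eta * prev)
        states[k] = prev
    return states

def train_readout(states, targets, lam=1e-6):
    """Ridge-regression readout over the reservoir states (plus bias)."""
    X = np.hstack([states, np.ones((len(states), 1))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ targets)

u = np.random.default_rng(1).uniform(0.0, 0.5, 500)  # toy input sequence
w = train_readout(dfr_states(u), u)                  # toy target: identity
```

Only the cheap linear readout is trained; the reservoir itself needs no training, which is what makes DFRs attractive for low-power FPGA implementation.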
163

High level waste system impacts from acid dissolution of sludge

Ketusky, Edward Thomas 31 March 2008 (has links)
Currently at the Savannah River Site (SRS), there are fifteen single-shell, 3.6-million-liter tanks containing High Level Waste. To close the tanks, the sludge must be removed. Mechanical methods have had limited success, and oxalic acid cleaning is now being considered as a new technology. This research uses sample results and chemical equilibrium software to develop a preferred flowsheet and evaluate the acceptability of the system impacts. Based on modeling and testing, between 246,000 and 511,000 l of 8 wt% oxalic acid were required to dissolve a 9,000-liter Purex sludge heel; for SRS H-Area modified sludge, 322,000 to 511,000 l were required. To restore the pH of the treatment tank slurries, approximately 140,000 to 190,000 l of 50 wt% NaOH or 260,000 to 340,000 l of supernate were required. In developing the flowsheet, there were two primary goals for minimizing downstream impacts: first, to ensure that the resultant oxalate solids were transferred to DWPF without being washed; second, to transfer the remaining soluble sodium oxalates to the evaporator drop tank so that they neither pass through nor precipitate in the evaporator pot. Adiabatic modeling determined the maximum possible temperature to be 73.5°C and the maximum expected temperature to be 64.6°C. At one atmosphere, a maximum of 770 l of water vapor was generated at 73.5°C, while a maximum of 254 l of carbon dioxide was generated at 64.6°C. Although tank wall corrosion was not a concern, because of the large cooling-coil surface area the corrosion-induced hydrogen generation rate was calculated to be as high as 10,250 l/hr. Since the minimum tank purge exhaust was assumed to be 5,600 l/hr, the corrosion-induced hydrogen generation rate was identified as a potential concern. Excluding corrosion-induced hydrogen, trending the behavior of the spiked constituents of concern and considering the conditions necessary for ignition showed that energetic compounds do not represent an increased risk. Based on modeling, about 56,800 l of resultant oxalates could be added to a washed sludge batch with minimal impact on the number of additional glass canisters produced. For each sludge batch with 1 to 3 heel dissolutions, about 60,000 kg of sodium oxalate entered the evaporator system, most of it collecting in the drop tank, where it will remain until eventual salt heel removal. For each 6,000 kg of sodium oxalate in the drop tank, about 189,000 l of Saltstone feed would eventually be produced. Overall, except for corrosion-induced hydrogen, there were no significant process impacts that would forbid the use of oxalic acid in cleaning High Level Waste tanks. / MATHEMATICAL SCIENCES / M. Tech. (Chemical Engineering)
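A rough dilution check, not taken from the dissertation, makes the hydrogen concern concrete. Assuming the generated hydrogen mixes ideally with the minimum purge flow, and taking the well-known lower flammability limit (LFL) of hydrogen in air of about 4%:

```latex
\[
  \frac{\dot V_{\mathrm{H_2}}}{\dot V_{\mathrm{H_2}} + \dot V_{\mathrm{purge}}}
  = \frac{10\,250}{10\,250 + 5\,600} \approx 0.65 \gg 0.04 \approx \mathrm{LFL}_{\mathrm{H_2}}
\]
```

At the peak calculated generation rate, the purge alone cannot keep the vapor space below the flammability limit, which is why this rate was flagged as a potential concern.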
164

SPACE COMMUNICATION DEMONSTRATION USING INTERNET TECHNOLOGY

Israel, Dave, Parise, Ron, Hogie, Keith, Criscuolo, Ed October 2002 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of Internet communication technologies to space communication. The goal is to provide global addressability and standard network protocols and applications for future space missions. It describes the communication architecture and operations concepts that will be deployed and tested on a Space Shuttle flight in July 2002. This is a NASA Hitchhiker mission called Communication and Navigation Demonstration On Shuttle (CANDOS). The mission will be using a small programmable transceiver mounted in the Shuttle bay that can communicate through NASA’s ground tracking stations as well as NASA’s space relay satellite system. The transceiver includes a processor running the Linux operating system and a standard synchronous serial interface that supports the High-level Data Link Control (HDLC) framing protocol. One of the main goals will be to test the operation of the Mobile IP protocol (RFC 2002) for automatic routing of data as the Shuttle passes from one contact to another. Other protocols to be utilized onboard CANDOS include secure login (SSH), UDP-based reliable file transfer (MDP), and blind commanding using UDP. The paper describes how each of these standard protocols available in the Linux operating system can be used to support communication with a space vehicle. It will discuss how each protocol is suited to support the range of special communication needs of space missions.
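As a minimal illustration of the "blind commanding" concept (a command datagram sent with no transport-layer acknowledgment, receipt being confirmed later through telemetry), here is a Python sketch. The address, port, and command payload are placeholders, not the actual CANDOS command interface.

```python
import socket

# Hypothetical command endpoint; RFC 5737 TEST-NET address used as a stub.
CMD_HOST, CMD_PORT = "192.0.2.10", 5001

def send_blind_command(payload: bytes) -> None:
    """Fire-and-forget UDP command: no retransmission, no acknowledgment.
    The operator watches subsequent telemetry to confirm execution."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (CMD_HOST, CMD_PORT))

send_blind_command(b"SAFE_MODE_ENTER")  # hypothetical command mnemonic
```

UDP suits intermittent space links because, unlike TCP, it neither requires a handshake nor stalls when the round-trip time or loss rate spikes during a pass.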
165

INTERNET TECHNOLOGY FOR FUTURE SPACE MISSIONS

Rash, James, Hogie, Keith, Casasanta, Ralph October 2002 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / Ongoing work at the National Aeronautics and Space Administration Goddard Space Flight Center (NASA/GSFC) seeks to apply standard Internet applications and protocols to meet the technology challenge of future satellite missions. Internet protocols and technologies are under study as a future means to provide seamless dynamic communication among heterogeneous instruments, spacecraft, ground stations, constellations of spacecraft, and science investigators. The primary objective is to design and demonstrate in the laboratory the automated end-to-end transport of files in a simulated dynamic space environment using off-the-shelf, low-cost, commodity-level standard applications and protocols. The demonstrated functions and capabilities will become increasingly significant in the years to come as both earth and space science missions fly more sensors and the present labor-intensive, mission-specific techniques for processing and routing data become prohibitively expensive. This paper describes how an IP-based communication architecture can support all existing operations concepts and how it will enable some new and complex communication and science concepts. The authors identify specific end-to-end data flows from the instruments to the control centers and scientists, and then describe how each data flow can be supported using standard Internet protocols and applications. The scenarios include normal data downlink and command uplink as well as recovery scenarios for both onboard and ground failures. The scenarios are based on an Earth-orbiting spacecraft with downlink data rates from 300 kbps to 4 Mbps. Included examples are based on designs currently being investigated for potential use by the Global Precipitation Measurement (GPM) mission.
166

Software test case generation from system models and specification : use of the UML diagrams and high level Petri nets models for developing software test cases

Alhroob, Aysh Menoer January 2010 (has links)
The main part of software testing lies in the generation of test cases suitable for software system testing. The quality of the test cases plays a major role in reducing the time, and consequently the cost, of software system testing. Test cases generated at the model design stage are used to detect faults before implementation; this early detection offers more flexibility to correct faults in early stages rather than later ones. Generating tests that cover both static and dynamic software system model specifications is one of the challenges in software testing. The static and dynamic specifications can be represented efficiently by Unified Modelling Language (UML) class diagrams and sequence diagrams. The work in this thesis shows that High Level Petri Nets (HLPN) can represent both of them in one model. Using a proper model to represent the software specifications is essential for generating proper test cases. The research presented in this thesis introduces novel, automated test-case generation techniques that can be used within software system design testing. Furthermore, it introduces an efficient automated technique to generate a formal software system model (HLPN) from semi-formal models (UML diagrams). The work consists of four stages: (1) generating test cases from the class diagram and the Object Constraint Language (OCL), which can be used for testing the static specifications (the structure) of the software system; (2) combining the class diagram, sequence diagram, and OCL to generate test cases able to cover both static and dynamic specifications; (3) generating HLPN automatically from single or multiple sequence diagrams; (4) generating test cases from HLPN. The test cases generated in this work cover both the structure and the behaviour of the software system model. In the first two stages, the class diagram and sequence diagram are decomposed into nodes (edges), which are linked by a Classes Hierarchy Table (CHu) and an Edges Relationships Table (ERT); the linking process is based on the class and edge relationships. The relationships among the software system components are controlled by a consistency-checking technique, and the detection of these relationships has been automated. The test cases were generated based on these interrelationships, reduced to a minimum number, and the best test case was selected at every stage. The degree of similarity between test cases is used to discard similar test cases and avoid redundancy, as sketched in the example below. The transformation from UML sequence diagram(s) to HLPN simplifies the software system model and yields a formal model rather than a semi-formal one. After decomposing the sequence diagram into Combined Fragments, the proposed technique converts each Combined Fragment into a corresponding block in HLPN; these blocks are connected together in a Combined Fragments Net (CFN) to construct the HLPN model. Experimentation with the proposed techniques shows their effectiveness in covering most of the software system specifications.
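The similarity-based reduction step can be pictured with a short sketch (mine, not the thesis's algorithm): treat each test case as the set of model elements it covers, and greedily drop any case too similar to one already kept. The threshold value is an illustrative assumption.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity of two test cases, each viewed as a set of covered
    model elements (nodes/edges)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def reduce_test_cases(cases: list[set], threshold: float = 0.6) -> list[set]:
    """Keep a test case only if it is sufficiently dissimilar from every
    case already kept, removing redundant cases."""
    kept: list[set] = []
    for case in cases:
        if all(jaccard(case, k) < threshold for k in kept):
            kept.append(case)
    return kept

cases = [{"A", "B", "C"}, {"A", "B"}, {"D", "E"}]
print(reduce_test_cases(cases))  # -> [{'A','B','C'}, {'D','E'}] (order may vary)
```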
167

Model, exploration and estimation of consumption in dynamically reconfigurable heterogeneous architectures

Bonamy, Robin 12 July 2013 (has links)
The use of reconfigurable accelerators in the design of heterogeneous systems-on-chip offers attractive opportunities to increase performance and reduce energy consumption. These accelerators are commonly used alongside one or more processors to offload intensive computations and data-flow processing. The concept of dynamic reconfiguration, supported by some FPGA vendors, enables much more flexible systems, notably by allowing computation blocks to be sequenced in time on the same silicon area, thereby reducing the resources required. However, dynamic reconfiguration is not without impact on overall system performance, and it is difficult to estimate the repercussions of configuration decisions on energy consumption. The main objective of this thesis is to propose an exploration methodology for evaluating how the implementation choices for the tasks of an application affect a system-on-chip containing a dynamically reconfigurable resource, in order to optimize energy consumption or execution time. To this end, we have established power models of reconfigurable components, in particular FPGAs, which assist the designer. Using a measurement methodology on a Virtex-5, we first show that it is possible to generate hardware accelerators of various sizes with diverse timing and energy characteristics. Then, to quantify the implementation costs of these accelerators, we build three power models of dynamic partial reconfiguration. Finally, from the defined models and the generated accelerators, we develop an algorithm that explores the implementation options for a complete system. Relying on a high-level modeling platform, it analyzes the implementation costs of the tasks and their execution on the available resources (processor or reconfigurable region). The solutions offering the best performance with respect to the design constraints are retained.
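To make the exploration idea concrete, here is a small Python sketch (not the thesis's algorithm or data): every task can be mapped to the processor or to the reconfigurable region, mapping to the region pays a fixed reconfiguration overhead, and the full mapping space is enumerated to find the best solution under the chosen metric. All costs are invented.

```python
from itertools import product

# Hypothetical per-task (time_s, energy_J) costs on each resource.
TASKS = {
    "fft":    {"cpu": (0.90, 0.45), "fpga": (0.12, 0.08)},
    "filter": {"cpu": (0.40, 0.20), "fpga": (0.05, 0.03)},
}
RECONF = (0.004, 0.010)  # (time_s, energy_J) per partial reconfiguration

def evaluate(mapping):
    """Total (time, energy) of a task-to-resource mapping, charging a
    reconfiguration overhead for every task placed on the FPGA region."""
    time = energy = 0.0
    for task, target in mapping.items():
        dt, de = TASKS[task][target]
        time, energy = time + dt, energy + de
        if target == "fpga":
            time, energy = time + RECONF[0], energy + RECONF[1]
    return time, energy

best = min(
    (dict(zip(TASKS, choice))
     for choice in product(("cpu", "fpga"), repeat=len(TASKS))),
    key=lambda m: evaluate(m)[1],   # optimize energy; use [0] for time
)
print(best, evaluate(best))
```

A real exploration would prune this combinatorial space and account for contention and scheduling, which is precisely what the thesis's methodology addresses.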
168

Fast Code Exploration for Pipeline Processing in FPGA Accelerators

Rosa, Leandro de Souza 31 May 2019 (has links)
The increasing demand for energy-efficient computing has endorsed the use of Field-Programmable Gate Arrays to create hardware accelerators for large and complex codes. However, implementing such accelerators involves two complex decisions: deciding which code snippet is the best candidate for acceleration, and deciding how to implement the accelerator. When both decisions are considered concomitantly, the problem becomes more complicated, since the snippet's implementation affects the choice of snippet, creating a combined design space to be explored. As such, fast design space exploration of accelerator implementations is crucial to allow the exploration of different code snippets. However, this design space exploration suffers from several time-consuming tasks during the compilation and evaluation steps, making snippet exploration impractical. In this work, we focus on the efficient implementation of pipelined hardware accelerators and present our contributions to speeding up pipeline creation and design space exploration. For loop pipelining, the proposed approaches achieve up to 100× speed-up compared to state-of-the-art methods, saving 164 hours in a full design space exploration with less than 1% impact on the quality of the final results. For design space exploration, the proposed methods achieve up to 9.5× speed-up, again with less than 1% impact on result quality.
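The pipelining problem these speed-ups refer to can be summarized by the standard minimum initiation interval (II) bound used in loop pipelining; the sketch below (standard textbook material, not the dissertation's own algorithms) computes it from resource and recurrence constraints.

```python
import math

def res_mii(op_counts: dict[str, int], resources: dict[str, int]) -> int:
    """Resource-constrained minimum II: each resource class r can start
    at most resources[r] operations per cycle."""
    return max(math.ceil(n / resources[r]) for r, n in op_counts.items())

def rec_mii(cycles: list[tuple[int, int]]) -> int:
    """Recurrence-constrained minimum II over dependence cycles, each
    given as (total latency, total loop-carried distance)."""
    return max(math.ceil(lat / dist) for lat, dist in cycles)

# Example: 4 multiplies on 2 multipliers, plus a recurrence of latency 3
# carried across 1 iteration.
ii = max(res_mii({"mul": 4}, {"mul": 2}), rec_mii([(3, 1)]))
print(ii)  # 3
```

Finding a schedule that actually achieves this bound is the expensive part, and it is that search which the proposed approaches accelerate.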
169

CAD Tools and high level behavioral models dedicated to mixed-signal integrated circuits in 3D technology

Krencker, Jean-Christophe 23 November 2012 (has links)
The work of this thesis is part of a larger project, the 3D-IDEAS project, funded by the ANR. The purpose of this project is to establish the complete design chain for integrated circuits built on 3D technology. Power densities in these circuits are so high that temperature-related problems (electromigration, mismatch of bias currents and voltages, etc.) can call the design of the circuit into question. The high fabrication cost of these circuits requires the designer to validate their electro-thermal behavior prior to manufacturing. To meet this need, an accurate and reliable electro-thermal simulator must be available. Moreover, given the extreme complexity of these circuits, such a simulator should be compatible with a high-level modeling approach. The objective of this thesis is to develop such a simulator. The proposed solution integrates the simulator into Cadence®, a standard CAD environment for integrated-circuit design. The need for accurate results led us to develop a new methodology specific to high-level electro-thermal modeling. This manuscript comprises two main parts: the first details the approach adopted to build the simulator; the second presents and validates the operation of the simulator and the high-level modeling method.
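The core numerical difficulty of electro-thermal simulation can be shown in a few lines (a minimal sketch with invented values, not the simulator developed in the thesis): block power depends on temperature through leakage, while temperature depends on power through the package's thermal resistance, so the simulator must relax the two to a consistent fixed point.

```python
import math

T_AMB = 25.0                 # ambient temperature, degrees C
R_TH = 40.0                  # junction-to-ambient thermal resistance, C/W
P_DYN = 0.50                 # temperature-independent dynamic power, W
P_LEAK0, K = 0.004, 0.06     # leakage power P_LEAK0 * exp(K*(T-25)), W

def power(temp_c: float) -> float:
    """Total power of the block at a given junction temperature."""
    return P_DYN + P_LEAK0 * math.exp(K * (temp_c - 25.0))

temp = T_AMB
for _ in range(100):                       # fixed-point relaxation
    new_temp = T_AMB + R_TH * power(temp)  # thermal network response
    if abs(new_temp - temp) < 1e-6:
        break
    temp = new_temp
print(f"T = {temp:.2f} C, P = {power(temp):.3f} W")
```

In a full 3D stack this scalar relaxation becomes a coupled simulation between the electrical solver and a thermal model of the whole die stack, which is what the high-level methodology aims to make tractable.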
170

Constructing administrative protection that affords a high level of consumer protection, grounded in personal freedoms within the techno-humanist dynamic founded on the rights of privacy and data protection

Horn, Luiz Fernando Del Rio 20 March 2018 (has links)
This thesis examined the qualitative changes in consumption practices brought about by recent technological waves, which are gradually compromising consumer freedoms through invasive and invisible electronic access to the data of natural persons, revealing purchase predilections and enabling the determination of correlations, probabilities, and profile editing. From the perspective of this new condition of consumer subjection, the following problem was posed: considering the contemporary techno-humanist dynamic, how can a high level of consumer protection be promoted in Brazil through public administration, so as to guarantee personal freedoms on the basis of two autonomous and interconnected rights, namely the rights of privacy and of data protection? To this end, the monographic method was adopted, with a methodological strategy grounded in philosophical hermeneutics and hermeneutic phenomenology, guided by a transdisciplinary procedure for composing a deductively oriented macrodiscipline. To ground the inquiry, two hypotheses were advanced: the first, tied to an original revision of the temporal dimension of human existence, situates contemporaneity within a techno-humanist dynamic expressed in the duality of forces between the techno (technological apparatuses associated with instrumental rationality) and the social ethics of present-day human rights; the second, focused on the foundations of digital consumer protection, treats the rights of privacy and of data protection as autonomous, both elevated in their objective dimensions as public administrative law and governed by positive tutelage, interconnected by the common interest of broadening consumer protection and guaranteeing personal freedoms. These original theoretical and applied contributions were confirmed over the course of the research and culminated in the creation of a set of administrative measures for consumer protection, applicable to the state network of Procons and extendable to the entire SNDC, encompassing electronic commerce, technical consumer products, and the regime of unsolicited communications, frontier areas still awaiting absorption by the law with a view to raising the level of consumer protection.
