21

Data Transfer System for Host Computer and FPGA Communication

Barnard, Michael T. January 2015 (has links)
No description available.
22

The impact of innovative ICT technologies on the power consumption and CO2 emission of HTTP servers

Soler Domínguez, Sebastian January 2022 (has links)
ICT technologies and their adoption by the population are growing fast, and the energy this industry requires has followed the same trend, even considering all the efficiency improvements of recent decades. This is because the growth in data centers and information outpaces the efficiency gains adopted over the years. HTTP servers have been optimizing data-usage performance for years; nevertheless, data centers consume ever more energy because of the high demand placed on them. The objective of this study is to develop a tool that compares the energy, and hence CO2-emission, performance of cache and non-cache servers using a simple and an advanced model. The simple model is based on a compilation of extensive data analysis including more detailed information and inputs, and the advanced model considers an energy-consumption comparison between cache and non-cache technology. A database of CO2 emissions per MWh covering 49 countries is created, with this rate forecast until 2030. The results show that cache servers are between 5% and 20% more energy-efficient than non-cache servers for files under 5 MB, although the efficiency level varies with the size of the transferred file. Improved ICT technology therefore has the potential to avoid thousands of tons of CO2 per year if more websites adopt it; for example, an average news website with 300k visits per day could save around 150 tonCO2/year.
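The savings estimate above follows from simple arithmetic: annual energy use, times the cache efficiency gain, times the grid's CO2 intensity. The sketch below illustrates that relation only; every parameter value is a hypothetical placeholder, not a number taken from the thesis.

```python
# Illustrative back-of-the-envelope model (not the thesis tool): estimates
# annual CO2 savings when a website switches to cache-enabled servers.
# All parameter values used here are hypothetical placeholders.

def annual_co2_savings(visits_per_day: float,
                       energy_per_visit_kwh: float,
                       cache_efficiency_gain: float,
                       grid_co2_ton_per_mwh: float) -> float:
    """Return estimated tons of CO2 saved per year."""
    annual_energy_mwh = visits_per_day * 365 * energy_per_visit_kwh / 1000
    saved_energy_mwh = annual_energy_mwh * cache_efficiency_gain
    return saved_energy_mwh * grid_co2_ton_per_mwh

# Hypothetical figures: 300k daily visits, 0.01 kWh per visit,
# 15% energy saving from caching, 0.3 tCO2 per MWh grid intensity.
savings = annual_co2_savings(300_000, 0.01, 0.15, 0.3)
print(f"{savings:.0f} tCO2/year")
```

Plugging in a site's real traffic, per-visit energy, and national grid intensity (the thesis compiles the latter for 49 countries) would give a site-specific estimate.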
23

Exchanging and Protecting Personal Data across Borders: GDPR Restrictions on International Data Transfer

Oldani, Isabella 20 July 2020 (has links)
From the very outset of EU data protection legislation, and hence from the 1995 Directive, international data transfer has been subject to strict requirements aimed at ensuring that protection travels with the data. Although these rules have been widely criticized for their inability to deal with the complexity of modern international transactions, the GDPR has essentially inherited the architecture of the Directive together with its structural limitations. This research aims to highlight the main weaknesses of the EU data export restrictions and to identify what steps should be taken to enable a free, yet safe, data flow. It first places the EU data transfer rules in the broader debate about the challenges that the un-territorial cyberspace poses to States' ability to exert control over data. It then delves into the territorial scope of the GDPR to understand how far it goes in protecting data beyond EU borders. The objectives underpinning the data export restrictions (i.e., avoiding the circumvention of EU standards and protecting data from foreign public authorities), and the restrictions' limitations in achieving those objectives, are then identified. Lastly, three possible "solutions" for enabling data flow are tested. First, it is shown that the adoption by a growing number of non-EEA countries of GDPR-like laws, and the implementation by many companies of GDPR-compliant policies, is more likely to boost international data flow than internationally agreed standards. Second, the role that Article 3 GDPR may play in making data transfer rules "superfluous" is analysed, as well as the need to complement the direct applicability of the GDPR with cross-border cooperation between EU and non-EU regulators. Third, the study finds that the principle of accountability, as an instrument of data governance, may boost international data flow by shifting most of the burden of ensuring GDPR compliance onto organizations and away from resource-constrained regulators.
24

Design and Implementation of a MAC protocol for Wireless Distributed Computing

Bera, Soumava 28 June 2011 (has links)
The idea of wireless distributed computing (WDC) is rapidly gaining recognition owing to its promising potential in military, public-safety and commercial applications. The concept entails distributing a computationally intensive task assigned to one radio device among its neighboring peer radio devices. The added processing power of multiple radios can be harnessed to significantly reduce the time needed to obtain the results of the original complex task. Since wireless distributed computing depends on a radio device forming a network with its peers, such networks require a medium access control (MAC) protocol capable of scheduling channel access among multiple radios, ensuring reliable data transfer, incorporating rate adaptation, and handling link failures. This thesis elaborates on the design and implementation of such a MAC protocol for WDC, employed in a practical network of software-configurable radio devices. It also brings to light the design and implementation constraints and challenges faced in this endeavor and puts forward viable solutions. / Master of Science
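The core WDC idea the abstract describes — splitting one node's heavy task among peers and merging the partial results — can be sketched as follows. This is a minimal illustration only, not the thesis's protocol: the real system adds MAC scheduling, rate adaptation and link-failure handling on top, and the workload here is an invented stand-in.

```python
# Minimal sketch of the WDC idea: a source node partitions a
# compute-intensive task into chunks, hands each chunk to a peer,
# and merges the partial results.

def split_task(data, n_peers):
    """Partition the workload round-robin among peers."""
    return [data[i::n_peers] for i in range(n_peers)]

def peer_compute(chunk):
    # Stand-in for the expensive per-peer computation.
    return sum(x * x for x in chunk)

def distributed_result(data, n_peers=3):
    partials = [peer_compute(c) for c in split_task(data, n_peers)]
    return sum(partials)

data = list(range(10))
# Distributing the work must not change the answer.
assert distributed_result(data) == sum(x * x for x in data)
```

The MAC protocol's job is everything this sketch elides: deciding which radio transmits when, retransmitting lost chunks, and re-assigning work if a peer's link fails.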
25

The difference in BIM component data requirements between prescriptive representations and actual practices

Kim, Suduck 12 August 2015 (has links)
Utilizing Building Information Modeling (BIM) for Facility Management (FM) can reduce interoperability costs during the Operations and Maintenance (O&M) phase by improving data management. However, there are technological, process-related, and organizational barriers to successful implementation of BIM-integrated FM (BIM-FM), and the process-related barriers might be addressed through BIM-FM guidelines. Such guidelines need to be updated with lessons learned from actual practice in order to remain valid. To diagnose current practices and identify key differences between prescriptive representations and actual practices, this exploratory research compares BIM component data requirements between guidelines and actual practices at public higher-education institutions in Virginia; a gap between the two may prevent successful implementation of BIM-FM. The research is composed of three parts: a synthesis of prescriptive representations, a determination of actual data requirements in practice, and a comparison of the two. Data were collected through document analysis and through case studies combining document analysis with in-person interviews; a direct comparison was then conducted to test the research question. Although the established hypothesis, 'There would be some differences in BIM component data requirements between prescriptive representations and actual practices', could not be confirmed, owing to the difference in level of information and detail between prescriptive representations and actual practices, this exploratory research provides useful information. / Master of Science
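The comparison the study performs — guideline-required component data fields versus fields actually maintained — is essentially a set difference. The sketch below shows that operation with invented field names; it does not reproduce the study's actual requirements.

```python
# Hypothetical illustration of the study's comparison: BIM component data
# fields required by a guideline vs. fields kept in actual practice.
# All field names are invented for the example.

guideline_fields = {"manufacturer", "model_number", "serial_number",
                    "warranty_end", "install_date", "maintenance_schedule"}
practice_fields = {"manufacturer", "model_number", "install_date",
                   "room_location"}

# Fields the guideline asks for that practice does not record, and vice versa.
missing_in_practice = guideline_fields - practice_fields
extra_in_practice = practice_fields - guideline_fields
print(sorted(missing_in_practice))
print(sorted(extra_in_practice))
```

Repeating this per component type would surface exactly the kind of gap the study warns may hinder BIM-FM implementation.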
26

Data Transfer Throughput Research over Mobile Networks

Žvinys, Karolis 23 July 2012 (has links)
This master's thesis analyzes communication-channel parameters of UMTS technology that are related to data transfer throughput. After a review of Lithuanian and international research on the topic and of the equipment used for measuring mobile-network channel parameters, the parameters most crucial for data speed are selected. Using these parameters, models for predicting data transfer throughput are developed and their suitability and accuracy are verified. Both linear and nonlinear forecasting methods are applied: linear prediction uses linear regression, nonlinear prediction uses neural networks. The linear model achieves a forecast accuracy of 77.83%, while the nonlinear model reaches 76.19%; these accuracies were obtained using eight communication-channel parameters. Prediction models adapted to users' equipment are also proposed, relying only on the channel parameters reported by a standard terminal. Finally, the adequacy of the developed models is checked under real communication conditions.
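The linear-prediction approach described above can be sketched with ordinary least squares. The example below fits a one-parameter model on synthetic data (the thesis used eight real channel parameters; the "signal quality" values here are invented for illustration).

```python
# Minimal sketch of linear throughput prediction: fit an ordinary
# least-squares line mapping a channel-quality indicator to throughput.
# Data below is synthetic; the thesis used eight real channel parameters.

def fit_ols(xs, ys):
    """Return (slope, intercept) minimizing squared prediction error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic training data: throughput (Mbit/s) roughly linear in quality.
quality = [1, 2, 3, 4, 5, 6, 7, 8]
throughput = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0, 13.8, 16.2]

slope, intercept = fit_ols(quality, throughput)
predicted = [slope * q + intercept for q in quality]
```

Extending this to eight parameters means solving a multivariate least-squares problem; the nonlinear variant in the thesis swaps the linear model for a neural network.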
27

Speeding-up numerical simulation of cold pilgering using a multimesh method

Kpodzo, Koffi Woloe 18 March 2014 (has links)
This work aims at speeding up the numerical simulation of the cold pilgering process. It focuses on the Parallel Multiphysics Multimesh (MMP) method implemented in the Forge® code, which is dedicated to accelerating computations for processes in which deformation is localized within a small area of the computational domain. A locally refined mesh is used to solve the mechanical equations, while a uniformly refined mesh serves as the basic mesh for storing state variables and is used for the thermal calculations. Since the mechanical computation is the most expensive, reducing the number of nodes in its mesh yields high speed-ups. To apply the MMP method effectively to cold pilgering, several important aspects had to be addressed. First, the complex geometry of the tube requires a special mesh-coarsening technique that ensures maximum coarsening while guaranteeing a mesh suitable for computation; a technique using a cylindrical anisotropic metric is introduced. Second, with the elastoplastic behaviour law used for this process, inaccuracies were observed in the stress field, mainly due to the numerical diffusion generated by the transfer of P0 variables (constant per element) from the thermal mesh to the mechanical one. To remedy this, an approach combining two techniques was developed. State variables are updated directly on the mechanical mesh, instead of being updated on the thermal mesh and then transferred. In addition, a P0 transfer operator based on the superconvergent patch recovery (SPR) technique improves the accuracy of the transported field through the construction of higher-order recovered fields. High speed-ups are obtained on the studied cold pilgering cases, up to a factor of 6.5 for the resolution of the thermomechanical problem; the global simulation speed-up reaches a factor of 3.2 on a mesh with about 70,000 nodes in sequential runs. In parallel, performance drops slightly but remains comparable (2.7).
28

System of secured actigraph data transfer and processing

Mikulec, Marek January 2020 (has links)
The new Health 4.0 concept brings the idea of combining modern technologies from science and engineering with research in healthcare and medicine. This work realizes a system for the secured transfer and preprocessing of actigraph data, based on the Health 4.0 concept. The system was successfully designed, implemented, tested and secured. Using the GENEActiv actigraph, a non-invasive method of monitoring a subject's movement and temperature, the system securely transfers, processes and evaluates the subject's sleep data with the XGBoost machine-learning algorithm. The proposed system complies with the applicable law of the Czech Republic and meets its legal requirements.
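One building block of a secured transfer like the one described is an integrity tag that lets the server verify that actigraph records were not tampered with in transit. The sketch below shows this with HMAC-SHA256 from the Python standard library; it is an illustration, not the thesis's implementation, and the key and record format are invented. A real system would add encryption (e.g. TLS) and proper key management.

```python
import hmac
import hashlib

# Sketch of message authentication for actigraph records: the sender
# attaches an HMAC-SHA256 tag, the receiver recomputes and compares it.
# The key and record format below are placeholders, not from the thesis.

SECRET_KEY = b"placeholder-shared-key"

def sign(record: bytes) -> bytes:
    """Compute the authentication tag for one record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the record."""
    return hmac.compare_digest(sign(record), tag)

record = b"2020-05-01T02:13:00;accel=0.012;temp=36.4"
tag = sign(record)
assert verify(record, tag)            # untouched record verifies
assert not verify(record + b"x", tag) # any modification is detected
```

`hmac.compare_digest` is used instead of `==` so the comparison time does not leak how many tag bytes matched.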
29

RTP Protocol for Real-Time Data Distribution

Škarecký, Tomáš January 2010 (has links)
This paper explores the potential of distributing process data in real time using the RTP protocol. First, existing protocols used for this purpose are evaluated. Second, the trio of protocols RTP, RTCP and RTSP is described: their functions are explained and the possibilities for their extension are explored. On the basis of this knowledge, the problems that may occur when distributing process data over RTP are identified and possible solutions are proposed. To demonstrate data distribution via RTP, an application was designed and implemented that collects GPS positions from a PDA and provides them to clients, which can display the data on a map.
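Every RTP packet such a demonstrator sends begins with the fixed 12-byte header defined in RFC 3550. The sketch below packs that header; the payload-type value 96 (a common dynamic type) and all field values are illustrative choices, not taken from the paper's application.

```python
import struct

# Build the fixed 12-byte RTP header (RFC 3550) that would precede each
# GPS payload. Field values here are illustrative; PT 96 is a typical
# dynamically assigned payload type.

def rtp_header(seq: int, timestamp: int, ssrc: int,
               payload_type: int = 96, marker: int = 0) -> bytes:
    byte0 = 2 << 6                      # version=2, no padding/extension/CSRC
    byte1 = (marker << 7) | payload_type
    # network byte order: 2 single bytes, 16-bit seq, 32-bit ts, 32-bit SSRC
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0x1234ABCD)
assert len(hdr) == 12
assert hdr[0] == 0x80                   # version 2, flags clear
```

The sender increments `seq` per packet and advances `timestamp` by the sampling interval, which is what lets clients reorder packets and detect loss, the properties that make RTP attractive for process data.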
30

Query processing on low-energy many-core processors

Lehner, Wolfgang, Ungethüm, Annett, Habich, Dirk, Karnagel, Tomas, Asmussen, Nils, Völp, Marcus, Nöthen, Benedikt, Fettweis, Gerhard 12 January 2023 (has links)
Aside from performance, energy efficiency is an increasing challenge in database systems. To tackle both aspects in an integrated fashion, we pursue a hardware/software co-design approach. To meet the energy requirement from the hardware perspective, we utilize a low-energy processor design that offers us the possibility to place hundreds to millions of chips on a single board without any thermal restrictions. We address the performance requirement by developing several database-specific instruction-set extensions to customize each core, where no single core carries all extensions. Our hardware foundation is therefore a low-energy processor consisting of a large number of heterogeneous cores. In this paper, we introduce our hardware setup at the system level and present several challenges for query processing. Based on these challenges, we describe two implementation concepts and compare them. Finally, we conclude the paper with some lessons learned and an outlook on our upcoming research directions.
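The heterogeneity described above creates a placement problem: each query operator must run on a core that offers the instruction-set extension it needs. The sketch below illustrates that routing step with invented core, extension, and operator names; it is not the paper's implementation.

```python
# Hypothetical sketch of operator placement on heterogeneous cores:
# each core offers only some database-specific ISA extensions, and each
# operator is routed to a core that supports the extension it requires.
# All names below are invented for illustration.

cores = {
    "core0": {"hash", "bitpack"},
    "core1": {"scan"},
    "core2": {"hash", "sort"},
}

def place(operator: str, required_ext: str) -> str:
    """Return the id of the first core offering the required extension."""
    for core, exts in cores.items():
        if required_ext in exts:
            return core
    raise LookupError(f"no core offers {required_ext!r}")

plan = [("hash_join", "hash"), ("table_scan", "scan"), ("order_by", "sort")]
placement = {op: place(op, ext) for op, ext in plan}
print(placement)
```

A real scheduler would also balance load and data movement between cores; this first-fit loop only captures the capability-matching constraint.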
