About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11. Accelerating Graphics Rendering on RISC-V GPUs

Simpson, Joshua, 01 June 2022
Graphics Processing Units (GPUs) are commonly used to accelerate massively parallel workloads across a wide range of applications, from machine learning to cryptocurrency mining. The original application for GPUs, however, was to accelerate graphics rendering, which remains popular today in video gaming and video rendering. While GPUs began as fixed-function hardware with minimal programmability, modern GPUs combine many programmable cores with supporting fixed-function hardware for rasterization, texture sampling, and render output tasks. This balance enables GPUs to be used for general-purpose computing while remaining adept at graphics rendering. Previous work at the Georgia Institute of Technology implemented a general-purpose GPU (GPGPU) based on the open-source RISC-V ISA, featuring many programmable cores and texture-sampling support. However, creating a truly modern GPU based on the RISC-V ISA requires the addition of fixed-function hardware units for rasterization and render output in order to meet the demands of current graphics APIs such as OpenGL and Vulkan. This thesis discusses the work done by students at the Georgia Institute of Technology and California Polytechnic State University, San Luis Obispo, to accelerate graphics rendering on RISC-V GPUs, including the specific contributions made to implement fixed-function graphics hardware for the render output unit (ROP) and connect it to the programmable cores of a RISC-V GPU. The thesis also explores the performance and area cost of different hardware configurations within the implemented GPU.
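A core job of the render output unit the abstract mentions is blending a shaded fragment into the framebuffer. As an illustration only, here is a minimal software model of standard source-over alpha blending on 8-bit RGBA pixels (a hypothetical sketch, not the thesis's hardware design):

```python
def blend_src_over(src, dst):
    """Source-over alpha blending, as a ROP might apply it.

    src, dst: (r, g, b, a) tuples with 8-bit channels (0-255).
    Returns the blended 8-bit RGBA pixel written back to the framebuffer.
    """
    sa = src[3] / 255.0  # source alpha in [0, 1]
    rgb = tuple(round(src[i] * sa + dst[i] * (1.0 - sa)) for i in range(3))
    # Resulting alpha: source alpha plus what remains of destination alpha.
    alpha = round(src[3] + dst[3] * (1.0 - sa))
    return rgb + (alpha,)

# A fully opaque red fragment overwrites a blue framebuffer pixel.
print(blend_src_over((255, 0, 0, 255), (0, 0, 255, 255)))  # (255, 0, 0, 255)
```

In fixed-function hardware this operation runs once per covered fragment, which is why it is implemented as a dedicated unit rather than on the programmable cores.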
12. OXYBUOY: Constructing a Real-Time Inexpensive Hypoxia Monitoring Platform

Mohd Nor, Rizal, 27 October 2009
No description available.
13. Ανάπτυξη δικτύου αισθητήρων και πληροφοριακού συστήματος για τη διαχείριση του

Χουλιαρόπουλος, Αναστάσιος, 23 January 2012
This thesis presents the design and development of a complete information system that, deployed in the real world, can turn a house into a "smart" home. The purpose of the system is to measure and record the conditions in a space (humidity, temperature, light intensity, etc.) and to detect movement within it, so that certain operations can be triggered automatically. The system consists of a central computer connected to a network of sensors and to a database, and it can communicate with a mobile phone over a 3G network so that all functions are accessible remotely. The system thus enables the creation of a smart home with personalized specifications and room for extension.
In conclusion, the thesis aims to demonstrate the ease, simplicity, flexibility, and utility of a smart home. It presents the heart of the smart house: its basic units, how they work, and how they interact.
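The sense-then-act loop on the central computer can be sketched minimally. The rule thresholds and action names below are illustrative assumptions; the thesis does not specify its actual rules:

```python
def evaluate_rules(readings):
    """Map sensor readings to automatic actions, as the central
    computer of a smart home might. Thresholds are illustrative."""
    actions = []
    if readings.get("temperature_c", 0) > 28:
        actions.append("turn_on_ac")
    if readings.get("light_lux", 1000) < 50:
        actions.append("turn_on_lights")
    if readings.get("motion") and readings.get("armed"):
        actions.append("send_alert_to_phone")  # e.g. over the 3G link
    return actions

# A hot, dark room triggers two automatic operations.
print(evaluate_rules({"temperature_c": 30, "light_lux": 20}))
```

Extending the system then amounts to adding sensor keys and rules, which matches the scalability goal stated in the abstract.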
14. Embedded Vision Machine Learning on Embedded Devices for Image Classification in the Industrial Internet of Things

Parvez, Bilal, January 2017
Thanks to machine learning, machines have become extremely good at image classification in near real time: given sufficient training data, powerful machines can be trained to recognize images as well as any human. Until now, the norm has been to send pictures to a server and have the server recognize them. With an increasing number of sensors, the trend is moving towards edge computing to curb the growing rate of data transfer and the resulting communication bottlenecks. The idea is to do the processing locally, or as close to the sensor as possible, and then transmit only actionable data to the server. While this solves a plethora of communication problems, especially in industrial settings, it creates a new one: the sensors must perform this computationally intensive image classification themselves, which is a challenge for embedded and wearable devices due to their resource-constrained nature. This thesis analyzes machine learning algorithms and libraries with the goal of porting image classifiers to embedded devices. This includes comparing different supervised machine learning approaches to image classification and determining which are best suited for porting to embedded devices, taking a step towards making the testing and implementation of machine learning algorithms as easy as on their desktop counterparts. The goal is to ease the process of porting new image recognition and classification algorithms to a host of different embedded devices and to provide the motivations behind design decisions. The final proposal goes through all design considerations and implements a hardware-independent prototype, which can be used as a reference for designing and later porting machine learning classifiers to embedded devices.
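The edge-computing principle the thesis builds on — classify locally, transmit only actionable results — can be sketched as follows. The toy brightness classifier and the confidence threshold are stand-ins for a real ported model:

```python
def classify(image):
    """Stand-in for an on-device image classifier; a real deployment
    would run a ported model here. Returns (label, confidence)."""
    brightness = sum(image) / len(image)  # toy rule: bright means "defect"
    return ("defect", 0.9) if brightness > 128 else ("ok", 0.6)

def process_frame(image, send, threshold=0.8):
    """Run inference at the edge; transmit only confident,
    actionable results to the server."""
    label, conf = classify(image)
    if conf >= threshold:
        send({"label": label, "confidence": conf})
        return True
    return False  # nothing sent; bandwidth is saved

sent = []
process_frame([200] * 16, sent.append)  # bright frame: transmitted
process_frame([10] * 16, sent.append)   # dark frame: kept local
print(sent)
```

The structure, not the classifier, is the point: only the `send` call crosses the network, so swapping in a real model leaves the communication pattern unchanged.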
15. Implementação de sistemas baseados em regras nebulosas por método matricial em dispositivos embarcados (Implementation of fuzzy rule-based systems by a matrix method on embedded devices)

Ganselli, Tiago Trevisani, 11 December 2014
It is known that the need for devices with higher processing capacity and lower power consumption is increasing, making algorithm optimization necessary to allow maximum utilization of an application's resources. In this work, the Matrix Method was implemented in embedded systems to solve fuzzy-logic calculations, allowing local decision-making to be included in a wide range of applications. Code was developed for Scilab, Arduino, and the embedded Linux distribution OpenWRT, and was tested on real devices through comparison with the original Matrix Method implementation and a case study of the MAC anomaly in IEEE 802.11 networks. Results show that the Matrix Method is suitable for use in embedded systems, and that analysis and specific configuration of each application are necessary for the best performance to be achieved. The conclusion shows that balancing decision-making against result precision is essential to minimize resource consumption. It is expected that other studies will make use of the created algorithms, assisting the decision-making process in embedded systems for the countless emerging applications.
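The abstract does not spell out the Matrix Method's formulation; as an illustration under that caveat, fuzzy inference is commonly reduced to a fixed-size matrix operation such as max–min composition of a membership vector with a rule matrix, which suits memory-constrained embedded targets:

```python
def max_min_compose(mu, R):
    """Max-min composition of an input membership vector mu (length n)
    with a fuzzy rule matrix R (n x m):

        out[j] = max_i min(mu[i], R[i][j])

    One common matrix formulation of fuzzy inference; the thesis's
    exact method may differ."""
    m = len(R[0])
    return [max(min(mu[i], R[i][j]) for i in range(len(mu)))
            for j in range(m)]

mu = [0.2, 0.8, 0.5]      # degrees of membership of the fuzzified input
R = [[1.0, 0.3],          # rule strengths linking inputs to outputs
     [0.4, 0.9],
     [0.6, 0.1]]
print(max_min_compose(mu, R))  # [0.5, 0.8]
```

Because the loop bounds are fixed by the rule base, memory use is static and predictable, which is exactly the property an embedded port needs.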
16. Addressing the Consensus Problem in Real Time Using Lightweight Middleware on Distributed Devices

Hall, Keith Anton, August 2011
With the advent of the modern technological age, a plethora of electronic tools and devices is available in numbers as never before. While beneficial and exceedingly useful, these electronic devices require users to operate them, and when designing systems capable of observing and acting upon an environment, the number of devices can become unmanageable. Previously, middleware systems were designed for large-scale computational systems. By applying similar concepts and distributing logic to autonomous agents residing on the devices themselves, a new paradigm in distributed-systems research on lightweight devices becomes conceivable. This research therefore focuses on the development of a lightweight middleware that can reside on small devices, enabling those devices to act autonomously. Analyses determined the most advantageous methods for solving this problem, and a proper research focus was achieved by defining a set of requirements for the middleware as well as assumptions about the environment and system in which it would operate. By utilizing existing concepts such as peer-to-peer networking and distributed hash tables, devices in this system can communicate effectively and efficiently. Furthermore, custom algorithms for communicating with other devices and collaborating on task assignments provide an approach to solving the consensus problem in real time. The resulting middleware was demonstrated to prove its efficacy: using three devices capable of observing and acting upon the environment, two tests highlighted the consensus-finding mechanism as well as the devices' ability to respond autonomously to changes in the environment.
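The abstract leaves the consensus-finding mechanism unspecified. Purely as an illustration, one round of simple majority voting among agents might look like this (a real-time NRS would add timeouts and fault handling on top):

```python
from collections import Counter

def majority_consensus(proposals, quorum=None):
    """One round of consensus by majority vote.

    proposals: mapping of agent id -> proposed value.
    Returns the agreed value, or None if no value reaches the quorum
    (a strict majority by default)."""
    if quorum is None:
        quorum = len(proposals) // 2 + 1
    votes = Counter(proposals.values())
    value, count = votes.most_common(1)[0]
    return value if count >= quorum else None

# Three devices observe the environment and vote on a task assignment.
print(majority_consensus({"dev1": "track", "dev2": "track", "dev3": "idle"}))
```

Returning None on a tie is a deliberate choice here: in a real-time setting the agents would then re-observe and vote again rather than block.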
17. Extending a networked robot system to include humans, tiny devices, and everyday objects

Rashid, Md. Jayedur, January 2011
In networked robot systems (NRS), robots and robotic devices are distributed in the environment; typically, tasks are performed through the cooperation and coordination of multiple networked components. NRS offer advantages over monolithic systems in terms of modularity, flexibility, and cost effectiveness, and they are thus becoming a mainstream approach to including robotic solutions in everyday environments. The components of an NRS are usually robots and sensors equipped with rich computational and communication facilities. In this thesis, we argue that the capabilities of an NRS would greatly increase if it could also accommodate among its nodes simpler entities, like small ubiquitous sensing and actuation devices, home appliances, or augmented everyday objects. For instance, a domestic robot needs to manipulate food items and interact with appliances; such a robot would benefit from the ability to exchange information with those items and appliances directly, in the same way as with other networked robots and sensors. Combining such highly heterogeneous devices inside one NRS is challenging, and one of the major challenges is providing a common communication and collaboration infrastructure. In the field of NRS, this infrastructure is commonly provided by a shared middleware. Unfortunately, current middlewares lack the generality needed to allow heterogeneous entities such as robots, simple ubiquitous devices, and everyday objects to coexist in the same system. In this thesis we show how an existing middleware for NRS can be extended to include three new types of "citizens" in the system, on a par with the other robots. First, we include computationally simple embedded devices, like ubiquitous sensors and actuators, by creating a fully compatible tiny version of the existing robotic middleware.
Second, we include augmented everyday objects and home appliances that are unable to run the middleware on board, by proposing a generic design pattern based on the notion of an object proxy. Finally, we go one step further and include humans as nodes in the NRS by defining the notion of a human proxy. While a few other NRS are able to include both robots and simple embedded devices in the same system, the use of proxies to include everyday objects and humans in a generic way is a unique feature of this work. To verify and validate the above concepts, we implemented them in the Peis-Ecology NRS model. We report a number of experiments based on this implementation, which provide both quantitative and qualitative evaluations of its performance, reliability, and interoperability.
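The object-proxy pattern described above — a node that runs on a capable host and stands in for a device too simple to run the middleware itself — can be sketched generically. Class and method names here are illustrative, not the Peis-Ecology API:

```python
class MiddlewareNode:
    """Minimal interface every participant in the NRS exposes."""
    def read_state(self):
        raise NotImplementedError
    def command(self, action):
        raise NotImplementedError

class ApplianceProxy(MiddlewareNode):
    """Represents an appliance that cannot run the middleware on
    board, translating node calls into the appliance's own protocol."""
    def __init__(self, send_fn, state_fn):
        self._send = send_fn    # e.g. IR blaster or serial link
        self._state = state_fn  # e.g. a power-draw or door sensor

    def read_state(self):
        return self._state()

    def command(self, action):
        self._send(action)
        return "ack"

# A fridge joins the NRS through its proxy, on a par with the robots.
fridge = ApplianceProxy(send_fn=lambda action: None,
                        state_fn=lambda: {"door": "closed", "temp_c": 4})
print(fridge.read_state())
```

The same shape extends to the human proxy: only the `send_fn`/`state_fn` bindings change (e.g. a speech interface instead of a serial link), while the node interface seen by the rest of the NRS stays identical.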
18. Real-time hand segmentation using deep learning / Hand-segmentering i realtid som använder djupinlärning

Favia, Federico, January 2021
Hand segmentation is a fundamental part of many computer vision systems aimed at gesture recognition or hand tracking. In particular, augmented reality solutions need a very accurate gesture-analysis system in order to satisfy end consumers, so the hand-segmentation step is critical. Segmentation is a well-known problem in image processing: the process of dividing a digital image into multiple regions of pixels with similar qualities. Classifying which pixels belong to the hand and which belong to the background must be done in real time and with reasonable computational complexity. While in the past mainly lightweight probabilistic and machine learning approaches were used, this work investigates the challenges of real-time hand segmentation achieved through several deep learning techniques: can current state-of-the-art segmentation systems for smartphone applications be improved? Several models are tested and compared based on accuracy and processing speed. A transfer-learning-like approach guides this work, since many architectures were built for generic semantic segmentation or for particular applications such as autonomous driving. Great effort is spent on organizing a solid and generalized dataset of hands, exploiting existing datasets and data collected by ManoMotion AB. Since the first aim was a really accurate hand segmentation, the RefineNet architecture is ultimately selected, and both quantitative and qualitative evaluations are performed, considering its advantages and analysing the problems related to its computational time, which could be improved in the future.
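Quantitative evaluation of segmentation models is typically done with per-pixel intersection over union (IoU); the thesis does not state its exact metric, so the sketch below is an assumption, shown over flat binary masks (1 = hand, 0 = background):

```python
def mask_iou(pred, truth):
    """Intersection over union of two binary masks given as flat
    lists of 0/1 pixel labels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks agree

pred  = [1, 1, 0, 0, 1, 0]   # predicted hand pixels
truth = [1, 0, 0, 0, 1, 1]   # ground-truth hand pixels
print(mask_iou(pred, truth))  # 2 intersecting / 4 in union = 0.5
```

IoU penalizes both false positives and false negatives in one number, which is why it is the standard accuracy axis when trading off against processing speed.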
19. Leakage Conversion for Training Machine Learning Side-Channel Attack Models Faster

Manna, Rohan Kumar, 01 May 2020
Recent improvements in the area of the Internet of Things (IoT) have led to extensive use of embedded devices and sensors; hence, the need for the safety and security of these devices increases proportionately. In the last two decades, the side-channel attack (SCA) has become a massive threat to interconnected embedded devices, and extensive research has led to the development of many different forms of SCA for extracting the secret key by exploiting various kinds of leakage information. Lately, machine learning (ML) based models have been more effective at breaking complex encryption systems than other types of SCA models. However, these ML models require a lot of training data, which cannot be collected while attacking a device in a real-world situation. In this thesis, we address this issue by proposing a new technique of leakage conversion, in which we convert high signal-to-noise ratio (SNR) power traces to low-SNR averaged electromagnetic traces. In addition, we show how artificial neural networks (ANNs) can learn various non-linear dependencies among features in the leakage information, which adaptive digital signal processing (DSP) algorithms cannot. Initially, we successfully convert traces in the time interval of 80 to 200, where the cryptographic operations occur. Next, we show the successful conversion of traces lying in any time frame, as well as traces with random key and plaintext values. Finally, to validate the leakage-conversion technique and the generated traces, we successfully implement correlation electromagnetic analysis (CEMA) with an approximate minimum traces to disclosure (MTD) of 480.
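The CEMA step relies on Pearson correlation between a leakage hypothesis (commonly the Hamming weight of an intermediate value) and measured trace samples. A minimal sketch with illustrative toy data — the thesis's actual traces and targets are not reproduced here:

```python
from math import sqrt

def hamming_weight(x):
    """Leakage model: number of set bits of an intermediate value."""
    return bin(x).count("1")

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cema_rank(intermediate_values, trace_samples):
    """Correlate hypothesized leakage (Hamming weights under one key
    guess) against one sample point across traces. A full CEMA repeats
    this per key guess and per sample, keeping the guess with the
    highest absolute correlation."""
    hyp = [hamming_weight(v) for v in intermediate_values]
    return pearson(hyp, trace_samples)

# Toy data: traces that leak the Hamming weight directly correlate perfectly.
values = [0x00, 0x0F, 0xFF, 0x01]
traces = [float(hamming_weight(v)) for v in values]
print(round(cema_rank(values, traces), 6))  # 1.0
```

The MTD figure cited in the abstract is then simply the number of traces needed before the correct guess's correlation peak separates from the wrong guesses.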
