About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Saving Energy in Network Hosts With an Application Layer Proxy: Design and Evaluation of New Methods That Utilize Improved Bloom Filters

Jimeno, Miguel 11 December 2009
One of the most urgent challenges of the 21st century is to investigate new technologies that can enable a transition towards a society with a reduced CO2 footprint. Information Technology generates about 2% of global CO2, comparable to the aviation industry. Being connected to the Internet requires active participation in responding to protocol messages. Billions of dollars' worth of electricity every year are used to keep network hosts fully powered-on at all times only for the purpose of maintaining network presence. Most network hosts are idle most of the time, presenting a huge opportunity for energy savings and reduced CO2 emissions. Proxying has previously been explored as a means of allowing idle hosts to sleep yet still maintain network presence. This dissertation develops general requirements for proxying and is the first exploration of application-level proxying. Proxying for TCP connections, SIP, and Gnutella P2P was investigated. The TCP proxy keeps TCP connections open (when a host is sleeping) and buffers and/or discards packets as appropriate. The SIP proxy handles all communication with the SIP server and wakes up a sleeping SIP phone on an incoming call. The P2P proxy enables a Gnutella leaf node to sleep when not actively uploading or downloading files by handling all query messages and keyword lookups in a list of shared files. All proxies were prototyped and experimentally evaluated. Proxying for P2P led to the exploration of space- and time-efficient data structures to reduce the computational requirements of keyword search in the proxy. The use of pre-computation and hierarchical structures for reducing the false positive rate of a Bloom filter was explored. A Best-of-N Bloom filter was developed and shown to have a lower false positive rate than both a standard Bloom filter and the Power-of-2 Bloom filter. An analysis of the Best-of-N Bloom filter was completed using order statistics to predict the false positive rate.
Potential energy savings are shown to be in the hundreds of millions of dollars per year assuming a modest adoption rate of the methods investigated in this dissertation. Future directions could lead to greater savings.
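The Best-of-N idea can be sketched in a few lines. The following is a minimal illustration only: the hashing scheme and the fewest-set-bits selection criterion below are assumptions for the sketch, not necessarily the dissertation's exact construction. It builds N candidate filters over the same key set from different hash seeds and keeps the one that sets the fewest bits, since fewer set bits implies a lower false positive rate on lookups.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter over a fixed bit array of m bits with k hashes."""
    def __init__(self, m, k, seed=0):
        self.m, self.k, self.seed = m, k, seed
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions from a seeded hash (illustrative scheme).
        for i in range(self.k):
            h = hashlib.sha256(f"{self.seed}:{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

def best_of_n(items, m, k, n):
    """Best-of-N: build n candidate filters with different seeds and keep
    the one with the fewest set bits, a proxy for the lowest false
    positive rate on membership queries."""
    candidates = []
    for seed in range(n):
        bf = BloomFilter(m, k, seed)
        for it in items:
            bf.add(it)
        candidates.append(bf)
    return min(candidates, key=lambda bf: sum(bf.bits))

# Keyword list a P2P proxy might answer queries against (illustrative).
keywords = ["song", "video", "paper", "album"]
bf = best_of_n(keywords, m=256, k=3, n=8)
assert all(w in bf for w in keywords)  # Bloom filters never give false negatives
```

The selection step is the only difference from a standard Bloom filter; lookups afterwards are identical, so the proxy pays the extra cost once at build time.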
402

Communication Features Associated with Clinical Performance and Non-technical Skills in Healthcare Settings

Yuhao Peng 06 December 2019
Effective teamwork and communication are critical to patient outcomes, and subjective assessment tools have been developed for measuring team performance on both technical and non-technical skills (NTS). However, inherent biases remain when using subjective assessment tools.

In this study, third-year medical students participated in the Acute Care Trauma Simulation (ACTS). Each student performed the role of clinician in a team that included a nurse and a simulated patient. Participants conducted post-operative patient management, diagnosis, and treatment. Audio from all team members was recorded, and speech variables (e.g., speech duration, frequency of interaction) were extracted from the students' audio.

The models for Research Question I showed that a higher frequency of check-backs between student and nurse (p < 0.05) and a longer speech duration from student to patient (p = 0.001) significantly increased the student's clinical performance score. In Research Question II, a positive association (ρ = 0.456, p < 0.001) between speech duration from student to patient and overall NTS score was observed; this correlation was the strongest of all vocal features examined.

Both studies showed significant positive relationships between key vocal features (e.g., speech duration) and frequency of communication on the one hand and performance on the other. Metrics and vocal features derived from audio recordings can thus be used to predict clinical performance and NTS, can further contribute to the understanding of communication in healthcare settings, and, most importantly, offer the potential of an objective approach to assessment in simulation-based trauma care training.
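The ρ = 0.456 reported for Research Question II is a Spearman rank correlation. A minimal, self-contained sketch of that computation follows; the speech-duration and NTS values below are made-up illustrative numbers (deliberately chosen to be perfectly monotone), not the study's data.

```python
def ranks(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for t in range(i, j + 1):
            r[order[t]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Illustrative data: seconds of student-to-patient speech vs. NTS score.
duration = [42.0, 61.5, 38.2, 75.0, 55.3, 49.1]
nts = [3.1, 3.8, 2.9, 4.5, 3.6, 3.4]
rho = spearman(duration, nts)  # these toy series are perfectly monotone, so rho = 1
```

Because the correlation is computed on ranks rather than raw values, it captures any monotone association between speaking time and NTS score, not only a linear one.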
403

The influence of multi-walled carbon nanotubes on single-phase heat transfer and pressure drop characteristics in the transitional flow regime of smooth tubes

Grote, Kersten 10 June 2013
There are in general two different types of studies concerning nanofluids: the first concerns itself with the effective thermal conductivity, and the other with convective heat transfer enhancement. The study of convective heat transfer enhancement generally incorporates the study of thermal conductivity. Not many papers have been written on convective heat transfer enhancement, and even fewer on multi-walled carbon nanotubes in the transitional flow regime. In this work the thermal conductivity and viscosity were determined experimentally in order to study the convective heat transfer enhancement of the nanofluids. Multi-walled carbon nanotubes suspended in distilled water flowing through a straight, horizontal tube were investigated experimentally over a Reynolds number range of 1 000 to 8 000, which included the transitional flow regime. The tube was made of copper and had an internal diameter of 5.16 mm. Results on the thermal conductivity and viscosity indicated that both increase with nanoparticle concentration. Convective heat transfer experiments were conducted at a constant heat flux of 13 kW/m² with 0.33%, 0.75% and 1.0% volume concentrations of multi-walled carbon nanotubes. The nanotubes had an outside diameter of 10 - 20 nm, an inside diameter of 3 - 5 nm and a length of 10 - 30 μm. Temperature and pressure drop measurements were taken, from which the heat transfer coefficients and friction factors were determined as a function of Reynolds number. The thermal conductivities and viscosities of the nanofluids were determined experimentally so that the Reynolds and Nusselt numbers could be calculated accurately. Heat transfer was found to be enhanced when comparing the data on a graph of Nusselt number as a function of Reynolds number, but when comparing the results on a graph of heat transfer coefficient as a function of average velocity, the opposite effect was observed.
Performance evaluation of the nanofluids showed that the increase in viscosity was four times the increase in thermal conductivity, which resulted in an inefficient nanofluid. However, a study of the performance evaluation criterion showed that it can still be advantageous to operate the nanofluids in the transitional and turbulent flow regimes, since their energy budget there is better than that of distilled water. / Dissertation (MEng)--University of Pretoria, 2012. / Mechanical and Aeronautical Engineering / unrestricted
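The Reynolds and Nusselt numbers used throughout follow directly from the measured fluid properties and the tube geometry, which is why the thermal conductivity and viscosity had to be determined experimentally. A short sketch of those two definitions; the property values below are illustrative assumptions for a nanofluid in the 5.16 mm tube, not the thesis's measured data.

```python
def reynolds(rho, v, d, mu):
    """Re = rho * v * d / mu for internal pipe flow."""
    return rho * v * d / mu

def nusselt(h, d, k):
    """Nu = h * d / k, the dimensionless heat transfer coefficient."""
    return h * d / k

# Illustrative (assumed) values for a MWCNT nanofluid in the 5.16 mm tube.
d = 5.16e-3    # tube inner diameter, m (from the abstract)
rho = 1010.0   # density, kg/m^3
mu = 1.2e-3    # dynamic viscosity, Pa.s (raised by the nanotubes)
k = 0.65       # thermal conductivity, W/(m.K)
v = 0.8        # mean velocity, m/s
h = 2500.0     # heat transfer coefficient, W/(m^2.K)

Re = reynolds(rho, v, d, mu)   # falls in the transitional range studied
Nu = nusselt(h, d, k)
print(f"Re = {Re:.0f}, Nu = {Nu:.1f}")
```

Because mu appears in the denominator of Re and k in the denominator of Nu, an error in either measured property shifts every data point on the Nu-Re comparison graphs, which is the accuracy concern the abstract raises.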
404

Raspberry Pi Based Vision System for Foreign Object Debris (FOD) Detection

Mahammad, Sarfaraz Ahmad, Sushma, Vendrapu January 2020
Background: The main purpose of this research is to design and develop a cost-effective system for the detection of Foreign Object Debris (FOD), dedicated to airports. FOD detection has been a significant problem at airports, as debris can cause damage to aircraft. Developing such a device may otherwise require complicated hardware and software structures; the proposed solution is based on a computer vision system comprising flexible off-the-shelf components, such as a Raspberry Pi and Camera Module, allowing a simple and efficient way to detect FOD. Methods: The solution is developed through user-centred design, which guides the specification of a suitable and efficient system. The system specifications, objectives and limitations are derived from this user-centred design, and the candidate technologies follow from the required functionalities and constraints for a real-time FOD detection system. Results: FOD detection is achieved using background subtraction, with a single-shot multi-box detector (SSD) model implemented for FOD classification. The performance of the system is analysed by testing detection of FOD of different sizes at different distances. A web interface was also implemented to notify the user in real time when FOD occurs. Conclusions: We conclude that background subtraction and the SSD model are the most suitable algorithms for a Raspberry Pi based real-time FOD detection system. The system performs in real time, with an efficiency of 84% for detecting medium-sized FOD such as persons at a distance of 75 meters and 72% for detecting large-sized FOD such as cars at a distance of 125 meters; the average rate at which the system records and processes frames of the monitored area is 0.95 frames per second (fps).
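The background-subtraction step can be illustrated with a minimal median-background model. This is a deliberate simplification of what a deployed detector would use (the frame sizes, threshold, and synthetic frames below are assumptions for the sketch): the background is modelled as the per-pixel median of recent frames, and pixels in a new frame that deviate strongly from it are flagged as candidate FOD.

```python
import numpy as np

def detect_fod(frames, new_frame, threshold=30):
    """Simple background subtraction: model the background as the
    per-pixel median of recent frames, then flag pixels in the new
    frame that deviate by more than `threshold` grey levels."""
    background = np.median(np.stack(frames), axis=0)
    diff = np.abs(new_frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean foreground mask

# Synthetic stand-in for camera frames: a static grey runway surface.
bg = np.full((120, 160), 128, dtype=np.uint8)
history = [bg.copy() for _ in range(10)]

# A new frame containing a small bright object (the "debris").
frame = bg.copy()
frame[50:60, 70:85] = 255

mask = detect_fod(history, frame)
print("foreground pixels:", int(mask.sum()))  # the 10 x 15 object patch
```

In the real system this mask would then be passed to the SSD model for classification; here the mask alone already localises the object.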
405

Accurate workload design for web performance evaluation.

Peña Ortiz, Raúl 13 February 2013
New web applications and services, increasingly popular in our daily lives, have completely changed the way users interact with the Web. In less than half a decade, the role played by users has evolved from that of mere passive consumers of information to active collaborators in the creation of the dynamic content typical of the current Web. Moreover, this trend is expected to grow and consolidate over time. This dynamic user behaviour is one of the main keys to defining workloads suitable for accurately estimating the performance of web systems. Nevertheless, the intrinsic difficulty of characterising user dynamism and applying it in a workload model means that many research works still employ workloads that are not representative of current web navigation. This doctoral thesis focuses on characterising and reproducing, for performance evaluation studies, a more realistic type of web workload, capable of imitating the behaviour of current web users. The state of the art in workload modelling and generation for web performance studies shows several shortcomings regarding models and software applications that represent the different levels of user dynamism. This motivates us to propose a more precise model and to develop a new workload generator based on it. Both proposals have been validated against a traditional approach to web workload generation. To this end, a new experimental environment capable of reproducing traditional and dynamic web workloads has been developed by integrating the proposed generator with a commonly used benchmark.
This doctoral thesis also analyses and evaluates, for the first time to the best of our knowledge, the impact that the use of dynamic workloads has on performance metrics. / Peña Ortiz, R. (2013). Accurate workload design for web performance evaluation [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/21054 / Palancia
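The dynamic workloads this thesis argues for can be contrasted with static trace replay by modelling the user as a stateful navigation process, where the next request depends on the current page and may itself create content. A minimal sketch of such a user model follows; the page types, transition probabilities, and mean think time are illustrative assumptions, not the thesis's model.

```python
import random

# Illustrative user-navigation model: states are page types, and the
# "post" state captures dynamic behaviour (users create content, not
# just consume it).
TRANSITIONS = {
    "home":   [("browse", 0.6), ("search", 0.3), ("exit", 0.1)],
    "browse": [("browse", 0.4), ("post", 0.3), ("home", 0.2), ("exit", 0.1)],
    "search": [("browse", 0.7), ("home", 0.2), ("exit", 0.1)],
    "post":   [("browse", 0.5), ("home", 0.3), ("exit", 0.2)],
}

def generate_session(rng, mean_think_time=2.0):
    """One user session: a sequence of (page, think-time) pairs,
    ending when the user reaches the 'exit' state."""
    state, session = "home", []
    while state != "exit":
        session.append((state, rng.expovariate(1.0 / mean_think_time)))
        choices, weights = zip(*TRANSITIONS[state])
        state = rng.choices(choices, weights=weights)[0]
    return session

rng = random.Random(42)
session = generate_session(rng)
print([page for page, _ in session])
```

A workload generator built on such a model emits requests whose mix and timing react to the user's state, which a fixed request trace cannot do.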
406

Performance Evaluation of Stereo Reconstruction Algorithms on NIR Images / Utvärdering av algoritmer för stereorekonstruktion av NIR-bilder

Vidas, Dario January 2016
Stereo vision is one of the most active research areas in computer vision. While hundreds of stereo reconstruction algorithms have been developed, little work has been done on the evaluation of such algorithms, and almost none on evaluation on Near-Infrared (NIR) images. Of the almost one hundred examined, we selected a set of 15 stereo algorithms, mostly with real-time performance, which were then categorized and evaluated on several NIR image datasets, including single-stereo-pair and stream datasets. The accuracy and run time of each algorithm were measured and compared, giving insight into which categories of algorithms perform best on NIR images and which algorithms may be candidates for real-time applications. Our comparison indicates that adaptive support-weight and belief propagation algorithms have the highest accuracy of all fast methods, but also longer run times (2-3 seconds). On the other hand, faster algorithms (that achieve 30 or more fps on a single thread) usually perform an order of magnitude worse when measuring the percentage of incorrectly computed pixels.
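The accuracy measure referred to here, the percentage of incorrectly computed pixels, counts a disparity estimate as wrong when it deviates from ground truth by more than a fixed threshold. A minimal sketch of that metric; the one-disparity-level threshold and the sample values are assumptions for illustration.

```python
import numpy as np

def bad_pixel_rate(disparity, ground_truth, threshold=1.0):
    """Fraction of valid pixels whose estimated disparity differs from
    ground truth by more than `threshold`. Invalid ground-truth pixels
    (e.g. occlusions) are marked as NaN and ignored."""
    valid = ~np.isnan(ground_truth)
    errors = np.abs(disparity[valid] - ground_truth[valid]) > threshold
    return errors.mean()

# Tiny illustrative example: a 2x3 ground-truth map with one invalid pixel.
gt = np.array([[10.0, 10.0, np.nan],
               [12.0, 12.0, 12.0]])
est = np.array([[10.2, 13.0, 5.0],
                [12.1, 11.8, 9.0]])
print(f"bad pixels: {bad_pixel_rate(est, gt):.0%}")  # 2 of 5 valid pixels
```

Averaging this rate over a dataset gives a single accuracy number per algorithm, which is what makes the cross-category comparison in the evaluation possible.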
407

Performance of a Micro-CT System : Characterisation of Hamamatsu X-ray source L10951-04 and flat panel C7942CA-22 / Prestanda hos ett Micro-CT System : Karaktärisering av Hamamatsu röntgenkälla L10951-04 och plattpanel C7942CA-22

Baumann, Michael January 2014
This master's thesis evaluated the performance of a micro-CT system consisting of the Hamamatsu microfocus X-ray source L10951-04 and CMOS flat panel C7942CA-22. The X-ray source and flat panel were characterised in terms of dark current, image noise and beam profile. Additionally, the micro-CT system's spatial resolution, detector lag and detector X-ray response were measured. Guidance for full image correction and methods for characterisation and performance testing of the X-ray source and detector are presented. A spatial resolution of 7 lp/mm at 10 % MTF was measured. A detector lag of 0.3 % was observed after ten minutes of radiation exposure. The performance of the micro-CT system was found to be sufficient for high-resolution X-ray imaging. However, the detector lag effect is strong enough to reduce image quality during subsequent image acquisitions and must either be avoided or corrected for.
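Detector lag is conventionally quantified as the residual signal in the first frame after the beam is switched off, relative to the signal under irradiation, both measured above the dark offset. A sketch of that calculation; the pixel values are illustrative assumptions chosen to reproduce the reported 0.3 % figure, not the thesis's measurements.

```python
def detector_lag(exposed_signal, dark_offset, residual_signal):
    """Lag in percent: residual signal (above the dark offset) in the
    first frame after the beam is off, relative to the exposed signal
    (also above the dark offset)."""
    return 100.0 * (residual_signal - dark_offset) / (exposed_signal - dark_offset)

# Illustrative mean pixel values (ADU) after ten minutes of exposure.
exposed = 3200.0   # mean signal with the beam on
dark = 200.0       # dark-current offset
residual = 209.0   # first frame after the beam is switched off

lag = detector_lag(exposed, dark, residual)
print(f"lag = {lag:.1f} %")
```

Subtracting the dark offset from both terms matters: without it, the dark current alone would be misreported as lag.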
408

Service Management for P2P Energy Sharing Scenarios Using Blockchain: Identification of Performance of Computational Efforts

Patha, Ragadeep January 2022
Peer-to-peer (P2P) energy trading enables prosumers and consumers to trade their energy as a simple service, giving energy users the possibility of sharing surplus energy without interruptions [1]. However, for large-scale deployment of P2P energy services, the allocation of resources for energy-trading transactions remains challenging to model. Blockchain technology, a distributed ledger system that provides a secure way of sharing information between the peers of a network, is suitable for the proposed P2P energy trading model and can be useful for larger-scale deployments. This thesis provides an initial implementation of a P2P energy trading model using blockchain and measures the performance of the implemented model in terms of computational effort. A literature review was conducted to survey previous studies of P2P energy trading using blockchain with performance evaluation. The technologies relevant to the thesis are then described, and from the literature the required models are derived and used to propose the system model of the thesis. The implemented system model is analysed under different computational efforts for the service management functions. For generating transactions, a Fabric client SDK application was created, which ensures that each transaction communicates with the blockchain's smart contract for a secured transaction. Finally, after measuring the computational efforts, the performance outcomes for the measured computational parameters are observed, so that the system's behaviour can be analysed when transactions take place between peers using the chosen blockchain technology.
409

Computer systems in airborne radar : Virtualization and load balancing of nodes

Isenstierna, Tobias, Popovic, Stefan January 2019
Introduction. The hardware used in today's radar systems evolves at an increasing rate. For existing radar software that relies on specific drivers or hardware, this quickly becomes a problem: when the required hardware is no longer produced or becomes outdated, compatibility problems emerge between the new hardware and the existing software. This research explores whether virtualization technology can help solve this problem: is it possible to address the compatibility problem with hypervisor solutions while also maintaining high performance? Objectives. The aim of this research is to explore virtualization technology, with a focus on hypervisors, to improve the way hardware and software cooperate within a radar system. The research investigates whether compatibility problems between new hardware and existing software can be solved, while also analysing the performance of virtualized solutions compared to non-virtualized ones. Methods. The proposed method is an experiment in which the two hypervisors Xen and KVM are analysed. The hypervisors run on two different systems: a native environment with similarities to a radar system is built and then compared with the same system with hypervisor solutions applied. Research on virtualization is conducted with a focus on security, hypervisor features and compatibility. Results. The results present a proposed virtual environment setup with the hypervisors installed. To address the compatibility issue, an old operating system is used to show that the implemented virtualization works. Finally, performance results are presented for the native environment compared against a virtual environment. Conclusions. From the benchmark results we see that individual performance may vary, which is to be expected on different hardware.
A virtual setup has been built, including the Xen and KVM hypervisors, together with NAS communication. By running an old operating system as a virtual guest, compatibility between software and hardware has been shown to exist using KVM as the virtual solution. From the results gathered, KVM appears to be a good solution to investigate further.
410

Modélisation de performance des caches basée sur l'analyse de données / A Data Driven Approach for Cache Performance Modeling

Olmos Marchant, Luis Felipe 30 May 2016
Today's Internet carries an ever-heavier traffic load due to the proliferation of video sites, notably YouTube. Cache servers play a key role in coping with this vertiginously growing demand. These servers are deployed close to the user and dynamically keep the most popular content via online algorithms known as "caching policies". With this infrastructure, content providers can satisfy demand efficiently while reducing the use of network resources. Cache servers are the basic building blocks of Content Delivery Networks (CDNs), which according to Cisco would carry more than 70% of video traffic by 2019. From an operational standpoint, it is therefore very important to be able to estimate the efficiency of a cache server as a function of the policy employed and its capacity. More specifically, this thesis addresses the following question: how much, at a minimum, must one invest in a cache server to attain a given level of performance? Because it was based on models that do not account for how the content catalogue evolves over time, the research state of the art provided inaccurate answers to this question. In this work, we propose new stochastic models, based on point processes, that make it possible to incorporate catalogue dynamics into the performance analysis. In this framework, we develop a rigorous asymptotic analysis for estimating the performance of a cache server under the "Least Recently Used" (LRU) policy. We validated the theoretical estimates against long Internet traffic traces, proposing a maximum-likelihood method for estimating the model parameters. / The need to distribute massive quantities of multimedia content to multiple users has increased tremendously in the last decade.
The current solution to this ever-growing demand is Content Delivery Networks, an application-layer architecture that nowadays handles the majority of multimedia traffic. This distribution problem has also motivated the study of new solutions such as the Information Centric Networking paradigm, whose aim is to add content delivery capabilities to the network layer by decoupling data from its location. In both architectures, cache servers play a key role, allowing efficient use of network resources for content delivery. As a consequence, the study of cache performance evaluation techniques has found new momentum in recent years. In this dissertation, we propose a framework for the performance modeling of a cache ruled by the Least Recently Used (LRU) discipline. Our framework is data-driven since, in addition to the usual mathematical analysis, we address two additional data-related problems: the first is to propose a model that a priori is both simple and representative of the essential features of the measured traffic; the second is the estimation of the model parameters starting from traffic traces. The contributions of this thesis concern each of the above tasks. In particular, for our first contribution, we propose a parsimonious traffic model featuring a document catalog evolving in time. We achieve this by allowing each document to be available for a limited (random) period of time. To make a sensible proposal, we apply the "semi-experimental" method to real data. These "semi-experiments" consist of two phases: first, we randomize the traffic trace to break specific dependence structures in the request sequence; second, we simulate an LRU cache with the randomized request sequence as input. For a candidate model, we refute an independence hypothesis if the resulting hit probability curve differs significantly from the one obtained from the original trace.
With the insights obtained, we propose a traffic model based on so-called Poisson cluster point processes. Our second contribution is a theoretical estimation of the cache hit probability for a generalization of the latter model. For this objective, we use the Palm distribution of the model to set up a probability space in which a document can be singled out for analysis. In this setting, we obtain an integral formula for the average number of misses. Finally, by means of a scaling of system parameters, we obtain for the latter expression an asymptotic expansion for large cache size. This expansion quantifies the error of a widely used heuristic in the literature known as the "Che approximation", thus justifying and extending it in the process. Our last contribution concerns the estimation of the model parameters. We tackle this problem for the simpler and widely used Independent Reference Model. By considering its parameter (a popularity distribution) to be a random sample, we implement a maximum likelihood method to estimate it. This method allows us to seamlessly handle the censoring phenomena occurring in traces. By measuring the cache performance obtained with the resulting model, we show that this method provides a more representative model of the data than typical ad-hoc methodologies.
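The "Che approximation" discussed above can be illustrated under the simpler Independent Reference Model: solve for the characteristic time at which the cache fills, then read off per-document hit probabilities, and compare against a direct LRU simulation. The Zipf exponent, catalog size, and cache size below are illustrative assumptions, not the thesis's measured traces.

```python
import bisect
import math
import random
from collections import OrderedDict

def che_hit_ratio(popularity, cache_size):
    """Hit ratio predicted by the Che approximation for LRU under IRM:
    solve sum_i (1 - exp(-p_i * tc)) = C for the characteristic time tc
    by bisection, then average the per-document hit probabilities."""
    lo, hi = 0.0, 1e12
    for _ in range(100):
        tc = (lo + hi) / 2
        if sum(1 - math.exp(-p * tc) for p in popularity) < cache_size:
            lo = tc
        else:
            hi = tc
    return sum(p * (1 - math.exp(-p * tc)) for p in popularity)

def lru_hit_ratio(popularity, cache_size, n_requests, seed=1):
    """Monte-Carlo hit ratio of an exact LRU cache fed by IRM requests."""
    rng = random.Random(seed)
    cum, s = [], 0.0
    for p in popularity:              # cumulative distribution for sampling
        s += p
        cum.append(s)
    cache, hits = OrderedDict(), 0
    for _ in range(n_requests):
        doc = bisect.bisect(cum, rng.random() * s)
        if doc in cache:
            hits += 1
            cache.move_to_end(doc)    # mark as most recently used
        else:
            cache[doc] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / n_requests

# Zipf(0.8) popularity over a 1000-document catalog, cache of 100 documents.
alpha, n_docs, c = 0.8, 1000, 100
w = [(i + 1) ** -alpha for i in range(n_docs)]
total = sum(w)
popularity = [x / total for x in w]

che = che_hit_ratio(popularity, c)
sim = lru_hit_ratio(popularity, c, 100_000)
print(f"Che approximation: {che:.3f}, simulated LRU: {sim:.3f}")
```

Under IRM the two numbers agree closely, which is the accuracy that the asymptotic expansion in this thesis quantifies and extends to models with an evolving catalog.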
