
WorkflowDSL: Scalable Workflow Execution with Provenance

Fernando, Tharidu January 2017 (has links)
Scientific workflow systems enable scientists to perform large-scale, data-intensive scientific experiments using distributed computing resources. Because of the diversity of domains and the complexity of the technology involved, delivering a successful outcome efficiently requires collaboration between domain experts and technical experts. However, existing scientific workflow systems demand a large investment of time to become familiar with and to adapt existing workflows. Thus, many scientific workflows are still implemented in script-based languages (such as Python and R) owing to familiarity and extensive third-party library support. In this thesis, we implement a framework that uses a domain-specific language to enable domain experts to collaborate on fine-tuning workflows, while technical experts use Python for task implementations. Moreover, the framework supports parallel execution without any specialized code. It also provides a provenance-capturing framework that enables users to analyse past executions and retrieve the complete lineage of any data item generated. Experiments performed with a real-world scientific workflow from the bioinformatics domain show that users were able to execute workflows efficiently while using our DSL for workflow composition and Python for task implementations. Moreover, we show that the captured provenance is useful for analysing past workflow executions.
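
The abstract does not reproduce the thesis's DSL grammar or API. As a rough sketch of the split it describes — declarative workflow composition for domain experts, plain Python task implementations for technical experts, with parallel execution and provenance capture handled by the framework — consider the following self-contained toy; every name in it (task, WORKFLOW, run, PROVENANCE) is hypothetical.

```python
# Hypothetical sketch only: the thesis's DSL and API are not shown in
# the abstract, so every name here is invented.
import concurrent.futures as cf
import time

TASKS = {}        # task name -> Python callable (the technical-expert side)
PROVENANCE = []   # append-only record of every task execution

def task(fn):
    """Register a plain Python function as a workflow task."""
    TASKS[fn.__name__] = fn
    return fn

@task
def load_reads(path):
    return f"reads({path})"

@task
def align(reads):
    return f"aligned({reads})"

@task
def call_variants(aligned):
    return f"variants({aligned})"

# The domain-expert side: a declarative composition mapping each task to
# the tasks whose outputs it consumes (what a DSL file would express).
WORKFLOW = {
    "load_reads": [],
    "align": ["load_reads"],
    "call_variants": ["align"],
}

def run(workflow, **inputs):
    """Execute tasks as their dependencies complete, in parallel where
    possible, recording provenance for every produced data item."""
    done, pending = {}, dict(workflow)
    with cf.ThreadPoolExecutor() as pool:
        while pending:
            ready = [t for t, d in pending.items() if all(x in done for x in d)]
            if not ready:
                raise ValueError("cycle or missing input in workflow")
            futures = {}
            for t in ready:
                args = [done[d] for d in pending.pop(t)] or [inputs[t]]
                futures[pool.submit(TASKS[t], *args)] = (t, args)
            for f in cf.as_completed(futures):
                t, args = futures[f]
                done[t] = f.result()
                PROVENANCE.append({"task": t, "inputs": args,
                                   "output": done[t], "at": time.time()})
    return done

print(run(WORKFLOW, load_reads="sample.fastq"))
# Complete lineage of any item = walk PROVENANCE backwards from its output.
```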

Power Scaling Mechanism for Low Power Wireless Receivers

Ghosal, Kaushik January 2015 (has links) (PDF)
Low-power operation for wireless radio receivers has been gaining importance lately on account of the recent surge in the usage of ubiquitous embedded mobile devices. These devices are becoming relevant in all domains of human influence. In most cases, battery life continues to be a usage bottleneck, as energy storage techniques have not kept pace with the growing demands of such mobile computing devices. Many applications of these radios have limits on the recharge cycle, i.e. the radio needs to run from a single battery for a long duration; this is especially true for sensor network applications and for implantable medical devices. The search for low-power wireless receivers has become quite advanced, with a plethora of techniques, ranging from circuit to architecture to system-level approaches, being formulated as part of standard design procedures. However, the next level of optimization, towards “smart” receiver systems, has been gaining credence and may prove to be the next challenge in receiver design and development. We aim to proceed further on this journey by proposing Power Scalable Wireless Receivers (PSRX), which have the capability to respond to instantaneous performance requirements to lower power even further. Traditionally, low-power receivers were designed for worst-case input conditions, namely low signal and high interference, leading to a large dynamic range of operation which directly impacts power consumption. We propose to take into account the variation in the performance required of the receiver, under varying signal and interference conditions, to trade off power. We have analyzed, designed and implemented a power scalable receiver targeted towards low data-rate receivers that can work for Zigbee or Bluetooth Low Energy (BLE) type standards. Each block of such a receiver system was evaluated for performance-power trade-offs, leading to the identification of tuning/control knobs at the circuit-architecture level of the receiver blocks. We then developed a usage algorithm for finding power-optimal operational settings for the tuning knobs while guaranteeing receiver reception performance in terms of Bit-Error-Rate (BER). We have proposed and demonstrated a novel signal measurement system to generate digitized estimates of signal and interference strength in the received signal, called the Received Signal Quality Indicator (RSQI). We achieve an RSQI average energy consumption of 8.1 nJ with a peak energy consumption of 9.4 nJ, which is quite low compared to the packet reception energy consumption of low-power receivers and substantially lower than the energy savings achievable by a power scalable receiver employing an RSQI. The full PSRX system was fabricated in a UMC 130 nm RF-CMOS process to test our concepts and to formally quantify the power savings achieved by following the design methodology. The test chip occupies an area of 2.7 mm² with a peak power consumption of 5.5 mW for the receiver chain and 18 mW for the complete PSRX. We were able to meet the receiver performance requirements of the Zigbee standard and achieved about 5X power savings over the range of input condition variations.
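
The abstract describes the usage algorithm only at a high level. A minimal sketch of the idea — choose the cheapest receiver configuration whose predicted BER still meets the spec under the current RSQI estimates of signal and interference — might look as follows; the settings table, power figures, and BER model are invented for illustration.

```python
# Sketch of a power-optimal knob search under a BER constraint.
# The knob settings, power numbers, and BER model below are invented
# for illustration; the thesis characterizes real circuit blocks.

# Each setting: (name, power_mw, min_signal_dbm, max_interference_db)
SETTINGS = [
    ("low_gain_low_linearity",   2.0, -60, 10),
    ("low_gain_high_linearity",  3.0, -60, 30),
    ("high_gain_low_linearity",  3.5, -85, 10),
    ("high_gain_high_linearity", 5.5, -85, 30),
]

def meets_ber(setting, signal_dbm, interference_db):
    """Stand-in for a measured BER model: a setting meets the BER spec
    when the signal is above its sensitivity and the interference is
    within the linearity it provides."""
    _, _, min_sig, max_intf = setting
    return signal_dbm >= min_sig and interference_db <= max_intf

def choose_setting(signal_dbm, interference_db):
    """Return the lowest-power setting that still meets the BER spec
    given RSQI-style estimates of signal and interference strength."""
    feasible = [s for s in SETTINGS if meets_ber(s, signal_dbm, interference_db)]
    return min(feasible, key=lambda s: s[1]) if feasible else SETTINGS[-1]

# Strong signal, little interference -> the cheapest configuration suffices.
print(choose_setting(signal_dbm=-50, interference_db=5))
# Weak signal with a strong interferer -> fall back to the worst-case mode.
print(choose_setting(signal_dbm=-80, interference_db=25))
```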

A large-scale model simulator based on the Scalable Simulation Framework (SSF)

Jahnecke, Alexandre Nogueira 06 July 2007 (has links)
This dissertation presents a proposal for a large-scale model simulator integrated into the Automatic Distributed Simulation Environment (ASDA), a tool that supports the use and development of distributed simulation and that has been under study in the Laboratory of Distributed Systems and Concurrent Programming (LaSDPC) at ICMC-USP. The proposed simulator allows ASDA to build models and programs that simulate large-scale queuing models, making the tool more complete. The simulator is based on a public standard for large-scale distributed simulation named the Scalable Simulation Framework (SSF). The prototype that was developed is a client-server program with three main components: a compiler, which translates models written in a modeling language into C++ simulation programs; the SSF library, which defines the API used by the simulation programs; and a runtime environment, which runs the simulation programs, analyzes the results, and passes them to a report generator. The prototype also contributes further studies on simulation, distributed simulation, and systems modelling using the tools developed by our group.
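
SSF standardizes a compact API for parallel discrete-event simulation for C++ and Java. As a language-neutral illustration of the event-driven kernel underlying the queueing-network models mentioned above — not the SSF API or the prototype's code — here is a minimal M/M/1 queue simulation:

```python
# Minimal discrete-event kernel for a single M/M/1 queue, illustrating
# the kind of queueing-network model the simulator executes. This is a
# generic sketch, not the SSF API (which is defined for C++/Java).
import heapq, random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, HORIZON = 0.8, 1.0, 10_000.0

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
clock, queue_len, busy, served, area = 0.0, 0, False, 0, 0.0

while events:
    t, kind = heapq.heappop(events)
    if t > HORIZON:
        break
    area += queue_len * (t - clock)   # time-weighted queue length
    clock = t
    if kind == "arrival":
        heapq.heappush(events, (clock + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy:
            queue_len += 1
        else:
            busy = True
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
    else:  # departure
        served += 1
        if queue_len:
            queue_len -= 1
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

# For rho = 0.8, the mean waiting-queue length approaches rho^2/(1-rho) = 3.2.
print(served, area / clock)
```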

Performance Analysis of HTTP Adaptive Video Streaming Services in Mobile Networks

Ye, Zakaria 02 May 2017 (has links)
Due to the growth of video traffic over the Internet in recent years, HTTP Adaptive Streaming (HAS) has become the most popular streaming technology, as it has been successfully adopted by the different actors in the Internet video ecosystem. It allows service providers to use traditional stateless web servers and mobile edge caches for streaming videos, and it allows users to access media content from behind firewalls and NATs. In this thesis we focus on the design of a novel video streaming delivery solution called Backward-Shifted Coding (BSC), a complementary solution to Dynamic Adaptive Streaming over HTTP (DASH), the standard version of HAS. We first describe the Backward-Shifted Coding architecture, based on multi-layer Scalable Video Coding (SVC), and discuss the implementation of the BSC protocol in a DASH environment. Then, we perform an analytical evaluation of Backward-Shifted Coding using results from queueing theory. The analytical results show that BSC considerably decreases video playback interruption, which is the worst event users can experience during a video session. Therefore, we design bitrate adaptation algorithms in order to enhance the Quality of Experience (QoE) of users in a DASH/BSC system. The results of the proposed adaptation algorithms show that the flexibility of BSC allows us to improve both the video quality and the variations of that quality during the streaming session. Finally, we propose new caching policies for video content encoded with SVC. Indeed, in a DASH/BSC system, cache servers are deployed to bring content close to users in order to reduce network latency and improve user-perceived experience. We use linear programming to obtain the optimal static cache composition and compare it with the results of our proposed algorithms. We show that these algorithms increase the system's overall hit ratio and offload the backhaul links by decreasing the content fetched from the origin web servers.
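
The thesis's adaptation algorithms are specific to DASH/BSC and are not detailed in the abstract. For orientation, a generic throughput-based bitrate adaptation loop of the kind such systems build on is sketched below; the bitrate ladder, smoothing, and thresholds are invented for illustration.

```python
# Generic throughput-based bitrate adaptation, sketched to illustrate
# the kind of algorithm discussed above; it is not the thesis's
# DASH/BSC scheme, and the bitrate ladder is invented.
BITRATES_KBPS = [350, 750, 1500, 3000, 6000]  # hypothetical quality ladder

def ewma(prev, sample, alpha=0.8):
    """Smooth throughput samples so one outlier does not trigger a switch."""
    return sample if prev is None else alpha * prev + (1 - alpha) * sample

def pick_bitrate(throughput_kbps, buffer_s, safety=0.8, low_buffer_s=5.0):
    """Highest bitrate sustainable at a safety margin; drop to the floor
    when the playout buffer is nearly empty, to avoid interruption."""
    if buffer_s < low_buffer_s:
        return BITRATES_KBPS[0]
    budget = safety * throughput_kbps
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

est = None
for sample_kbps, buf_s in [(4000, 12.0), (5200, 14.0), (900, 6.0), (800, 3.0)]:
    est = ewma(est, sample_kbps)
    print(round(est), pick_bitrate(est, buf_s))
```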

Scalable Nonparametric L1 Density Estimation via Sparse Subtree Partitioning

Sandstedt, Axel January 2023 (has links)
We consider the construction of multivariate histogram estimators for any density f, seeking to minimize the L1 distance to the true underlying density using arbitrarily large sample sizes. Theory for such estimators exists, and early-stage distributed implementations are available. Our main contributions are new algorithms that optimise away unnecessary network communication in the distributed stages of the construction of such estimators, using sparse binary tree arithmetic.
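
For readers unfamiliar with the object being built, the following toy sketch constructs a histogram density estimate by recursive binary partitioning of the sample space; the splitting rule and thresholds are invented, and the thesis's distributed, communication-minimizing algorithms are not reproduced here.

```python
# Toy histogram density estimation via recursive binary partitioning.
# Illustrative only: splitting rule and thresholds are invented.
import random

random.seed(0)
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10_000)]

def build(points, box, depth=0, min_count=200):
    """Split the box along alternating axes at its midpoint until cells
    hold few points, yielding (box, density) leaves of the histogram."""
    (lo0, hi0), (lo1, hi1) = box
    if len(points) <= min_count or depth >= 12:
        volume = (hi0 - lo0) * (hi1 - lo1)
        return [(box, len(points) / (len(data) * volume))]
    axis = depth % 2
    mid = (box[axis][0] + box[axis][1]) / 2
    left = [p for p in points if p[axis] <= mid]
    right = [p for p in points if p[axis] > mid]
    lbox = [list(b) for b in box]; rbox = [list(b) for b in box]
    lbox[axis][1] = mid; rbox[axis][0] = mid
    return build(left, lbox, depth + 1) + build(right, rbox, depth + 1)

leaves = build(data, [(-4.0, 4.0), (-4.0, 4.0)])
print(len(leaves), "cells; estimate near origin:",
      next(d for (b0, b1), d in leaves
           if b0[0] <= 0 <= b0[1] and b1[0] <= 0 <= b1[1]))
```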

An industrially scalable process for imparting poly (ethylene terephthalate) (PET) with durable and rechargeable antibacterial functions

Rahman, Md Zahidur 29 February 2016 (has links)
Healthcare-associated infections (HAIs), especially those caused by antibiotic-resistant bacteria such as methicillin-resistant Staphylococcus aureus (MRSA) and multidrug-resistant Pseudomonas aeruginosa, are of growing concern in healthcare facilities. Since 1995, the overall incidence rate of MRSA in Canadian hospitals has increased 19-fold, leading to unnecessary suffering by patients and increasing costs to hospitals. Many reports link pathogen-carrying hospital textiles to cases of infection. The development of effective, durable and rechargeable antibacterial healthcare textiles is expected to impede the transmission of infectious microorganisms and to act as an additional infection-control measure. N-chloramines have proven to be among the most suitable antimicrobial agents for immobilization onto healthcare textiles, imparting potent and rechargeable antimicrobial functions. However, the majority of medical textiles used in hospitals are synthetic fibers, which are chemically inert and hard to modify with N-chloramine functions. This study focuses on developing an industrially scalable process to durably immobilize N-chloramine onto poly(ethylene terephthalate) (PET), a common synthetic fiber used in healthcare textiles. Many techniques have been reported for activating the chemically inert PET surface with reactive functional groups. Among them, aminolysis and plasma treatments have attracted great attention because they readily introduce functional groups onto PET and can be set up for large-scale production. However, aminolysis suffers from polymer degradation and plasma treatment from low deposition, which hinders the use of these two processes for producing commercial antibacterial textiles. In this study, a new process was introduced that combines aminolysis and plasma treatments in a specific way that not only minimizes the problems associated with each but also creates more N-chloramine precursor functional groups on the surface of PET. The covalently bonded N-chloramine precursor groups can be easily converted to N-chloramine by dilute sodium hypochlorite solution. The presence of nitrogen on the PET substrates after modification was confirmed by CHNS/O elemental analysis and ATR-FTIR, showing successful incorporation of the N-chloramine precursor. The morphology of the treated fibers remained largely unchanged, with a slight decrease in their diameter, and the tensile strength of the treated fabric was acceptably maintained. The N-chloramine-modified PET presented highly effective antimicrobial properties: even after 50 home launderings, the rechargeable treated fabric demonstrated 100% reduction of both MRSA and P. aeruginosa within a contact time of 5 min.

SATELLITE GROUND STATION SECURITY USING SSH TUNNELING

Mauldin, Kendall October 2003 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / As more satellite ground station systems use the Internet as a means of connectivity, the security of the ground stations and of the data transferred between them becomes a growing concern. Possible solutions include software-level password authentication, link encryption, IP filtering, and several others, many of which are already implemented in a range of applications. SSH (Secure Shell) tunneling is one specific method that provides a strongly encrypted data link between computers on the Internet; it is used every day by individuals and organizations that need to secure the data they transfer. This paper describes the security requirements of a specific ground station network, how SSH can be implemented in the existing system, the software configuration, and operational testing of the revised ground network.
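
As a concrete illustration of the mechanism (not the paper's specific configuration), the sketch below sets up a local port forward with the standard OpenSSH client, so that a telemetry client connecting to a local port reaches a ground station service through an encrypted tunnel; all host names and ports are hypothetical.

```python
# Local-forwarding illustration: traffic to a local port is carried
# over an encrypted SSH tunnel to a gateway, which forwards it on to
# the ground station. Host names and ports here are hypothetical; the
# ssh -L/-N flags are standard OpenSSH options.
import subprocess

LOCAL_PORT = 8900            # telemetry client connects here
STATION = "gs1.example.org"  # ground station reachable from the gateway
STATION_PORT = 5000          # unencrypted telemetry service on the station
GATEWAY = "ops@gateway.example.org"

# -N: no remote command, tunnel only; -L: forward a local port through it.
tunnel = subprocess.Popen(
    ["ssh", "-N", "-L", f"{LOCAL_PORT}:{STATION}:{STATION_PORT}", GATEWAY]
)
# A client connecting to localhost:8900 now reaches the station service,
# with everything between this host and the gateway encrypted by SSH.
# tunnel.terminate() closes the forward when the pass is over.
```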

Modular Field Programmable Gate Array Implementation of a MIMO Transmitter

Shekhar, Richa October 2010 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / Multiple-Input Multiple-Output (MIMO) systems have at least two transmitting antennas, each generating unique signals; some applications may require three, four, or more transmitting devices to achieve the desired system performance. This paper describes the design of a scalable MIMO transmitter based on field programmable gate array (FPGA) technology. Each module contains an FPGA and the associated digital-to-analog converters, I/Q modulators, and RF amplifiers needed to power one of the MIMO transmitters. The system was designed to handle data rates of up to 10 Mbps and to transmit signals in the unlicensed 2.4 GHz ISM band.
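
As a conceptual illustration of the per-module baseband processing such a transmitter performs — not the paper's actual modulation scheme — the sketch below splits a bit stream across two antennas and maps each substream to QPSK I/Q samples; the spatial-multiplexing split and the QPSK mapping are illustrative assumptions.

```python
# Conceptual sketch of per-module baseband processing: split a bit
# stream across two antennas and map each substream to QPSK I/Q
# samples for that module's DACs and quadrature modulator. The
# spatial-multiplexing split and QPSK mapping are assumptions made
# for illustration, not the paper's design.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=1000)

def qpsk(substream):
    """Map bit pairs to unit-energy QPSK symbols (I + jQ)."""
    pairs = substream.reshape(-1, 2)
    return ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)

# Spatial multiplexing: even-indexed bits to antenna 0, odd to antenna 1,
# so each antenna transmits a unique signal.
ant0 = qpsk(bits[0::2])
ant1 = qpsk(bits[1::2])

# The real part drives the I DAC and the imaginary part the Q DAC.
print(ant0[:4].real, ant0[:4].imag)
```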

Scale and Concurrency of Massive File System Directories

Patil, Swapnil 01 May 2013 (has links)
File systems store data in files and organize these files in directories. Over decades, file systems have evolved to handle increasingly large files: they distribute files across a cluster of machines, they parallelize access to these files, they decouple data access from metadata access, and hence they provide scalable file access for high-performance applications. Sadly, most cluster-wide file systems lack any sophisticated support for large directories. In fact, most cluster file systems continue to use directories that were designed for humans, not for large-scale applications. The former use-case typically involves hundreds of files and infrequent concurrent mutations in each directory, while the latter consists of tens of thousands of concurrent threads that simultaneously create large numbers of small files in a single directory at very high speeds. As a result, most cluster file systems exhibit very poor file create rates in a directory, either due to limited scalability from using a single centralized directory server or due to reduced concurrency from using a system-wide synchronization mechanism. This dissertation proposes a directory architecture called GIGA+ that enables a directory in a cluster file system to store millions of files and sustain hundreds of thousands of concurrent file creations every second. GIGA+ makes two contributions: a concurrent indexing technique to scale out a growing directory on many servers and an efficient layered design to scale up performance. GIGA+ uses a hash-based, incremental partitioning algorithm that enables highly concurrent directory indexing through asynchrony and eventual consistency of the internal indexing state (while providing strong consistency guarantees to the application data). This dissertation analyzes several trade-offs made by the GIGA+ design between data migration overhead, load balancing effectiveness, directory scan performance, and entropy of indexing state, and compares them with the policies used in other systems. GIGA+ also demonstrates a modular implementation that separates directory distribution from directory representation. It layers a client-server middleware, which spreads work among many GIGA+ servers, on top of a backend storage system, which manages the on-disk directory representation. This dissertation studies how system behavior depends tightly on both the indexing scheme and the on-disk implementation, and evaluates how the system performs for different backend configurations, including local and shared-disk stores. The GIGA+ prototype delivers highly scalable directory performance (exceeding the most demanding Petascale-era requirements), provides the traditional UNIX file system interface (so applications run without any modifications) and offers new functionality layered on existing cluster file systems (which lack support for distributed directories).
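
The abstract names the key mechanism: hash-based, incremental partitioning that splits each directory partition independently as it grows. A toy sketch of that idea follows (thresholds, hash choice, and bookkeeping invented for illustration; this is not the GIGA+ implementation):

```python
# Toy sketch of incremental, hash-based directory partitioning in the
# spirit described above: each partition splits independently once it
# grows past a threshold, and the partition holding a name is found by
# widening a hash suffix until it matches a live partition.
import hashlib

SPLIT_THRESHOLD = 4
# Each partition records how many hash bits ("radix") it consumes.
partitions = {0: {"radix": 0, "entries": set()}}

def hbits(name, nbits):
    """Lowest nbits of a stable hash of the file name."""
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:8], "little")
    return h & ((1 << nbits) - 1)

def locate(name):
    """Follow the split history: widen the suffix until it names a
    partition whose radix matches the suffix width."""
    nbits = 0
    while True:
        pid = hbits(name, nbits)
        if pid in partitions and partitions[pid]["radix"] == nbits:
            return pid
        nbits += 1

def create(name):
    pid = locate(name)
    part = partitions[pid]
    part["entries"].add(name)
    if len(part["entries"]) > SPLIT_THRESHOLD:   # split this partition only
        nbits = part["radix"] + 1
        sibling = pid | (1 << (nbits - 1))
        moved = {n for n in part["entries"] if hbits(n, nbits) == sibling}
        part["entries"] -= moved
        part["radix"] = nbits
        partitions[sibling] = {"radix": nbits, "entries": moved}

for i in range(40):
    create(f"file{i:04d}")
print({pid: len(p["entries"]) for pid, p in sorted(partitions.items())})
```

In a distributed setting each partition would live on a different server, and because a split touches only the overflowing partition, servers can split asynchronously without system-wide coordination — the property the abstract attributes to GIGA+'s eventual consistency of indexing state.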
