1 |
CALCULATING POWER SPECTRAL DENSITY IN A NETWORK-BASED TELEMETRY SYSTEM Brierley, Scott 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Calculating the power spectral density (PSD) at the transducer or data acquisition system offers advantages in a network-based telemetry system. The PSD is provided in real time to the users. The conversion to PSD can either be lossless (allowing a complete reconstruction of the transducer signal) or lossy (providing data compression). Post-processing can convert the PSD back to time histories if desired. A complete reconstruction of the signal is possible, including knowledge of the signal level between the sample periods. Properly implemented, this method of data collection provides a sharp anti-aliasing filter with minimal added cost. Currently no standards exist for generating PSDs on the vehicle. New standards could help telemetry system designers understand the benefits and limitations of calculating the power spectral density in a network-based telemetry system.
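The abstract does not give the on-vehicle algorithm, so the following is only an illustrative sketch: a one-sided periodogram PSD estimate in Python (NumPy assumed), the standard building block such a system could use to transmit spectra instead of raw samples.

```python
import numpy as np

def periodogram_psd(x, fs):
    """One-sided periodogram estimate of the power spectral density.

    Returns frequency bins (Hz) and PSD values (power per Hz).
    """
    n = len(x)
    window = np.hanning(n)
    xw = x * window
    spectrum = np.fft.rfft(xw)
    # Normalise by the window power so the PSD level is unbiased.
    scale = 1.0 / (fs * np.sum(window ** 2))
    psd = scale * np.abs(spectrum) ** 2
    psd[1:-1] *= 2  # fold negative frequencies into the one-sided estimate
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

fs = 1000.0                          # sample rate in Hz
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 120.0 * t)    # 120 Hz test tone
freqs, psd = periodogram_psd(x, fs)
peak_hz = freqs[np.argmax(psd)]
```

A lossless variant would keep the complex spectrum (magnitude and phase) so the time history can be reconstructed exactly; discarding phase or coarsely quantising the PSD gives the lossy, compressed mode the abstract mentions.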
|
2 |
Pervasive service discovery in low-power and lossy networks Djamaa, B 05 October 2016 (has links)
Pervasive Service Discovery (SD) in Low-power and Lossy Networks (LLNs) is expected to play a major role in realising the Internet of Things (IoT) vision. Such a vision aims to expand the current Internet to interconnect billions of miniature smart objects that sense and act on our surroundings in a way that will revolutionise the future. The pervasiveness and heterogeneity of such low-power devices require robust, automatic, interoperable and scalable deployment and operability solutions. At the same time, the limitations of such constrained devices impose strict challenges regarding complexity, energy consumption, time-efficiency and mobility.
This research contributes new lightweight solutions to facilitate automatic deployment and operability of LLNs. It mainly tackles the aforementioned challenges through the proposition of novel component-based, automatic and efficient SD solutions that ensure extensibility and adaptability to various LLN environments. Building upon such an architecture, a first fully-distributed, hybrid push-pull SD solution dubbed EADP (Extensible Adaptable Discovery Protocol) is proposed, based on the well-known Trickle algorithm. Motivated by EADP's achievements, new methods to optimise Trickle are introduced. Such methods allow Trickle to encompass a wide range of algorithms and extend its usage to new application domains. One of the new applications is concretised in the TrickleSD protocol, which aims to build automatic, reliable, scalable, and time-efficient SD. To optimise the energy efficiency of TrickleSD, two mechanisms improving broadcast communication in LLNs are proposed. Finally, interoperable standards-based SD in the IoT is demonstrated, and methods combining zero-configuration operations with infrastructure-based solutions are proposed.
Experimental evaluations of the above contributions reveal that it is possible to achieve automatic, cost-effective, time-efficient, lightweight, and interoperable SD in LLNs. These achievements open novel perspectives for zero-configuration capabilities in the IoT and promise to bring the ‘things’ to all people everywhere.
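The abstract names Trickle (RFC 6206) as the base algorithm but does not reproduce the protocol details of EADP or TrickleSD. The core Trickle timer they build on can be sketched as follows (a simplified Python model: the transmit decision is reported at interval expiry rather than at the random time t, and the names here are illustrative, not from the thesis).

```python
import random

class Trickle:
    """Minimal model of the Trickle timer (RFC 6206)."""

    def __init__(self, imin=1.0, imax_doublings=4, k=1, rng=None):
        self.imin = imin
        self.imax = imin * (2 ** imax_doublings)
        self.k = k                      # redundancy constant
        self.rng = rng or random.Random()
        self.reset()

    def reset(self):
        # An inconsistency shrinks the interval so updates propagate fast.
        self.interval = self.imin
        self._start_interval()

    def _start_interval(self):
        self.counter = 0
        # Transmission time is picked in the second half of the interval.
        self.t = self.rng.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        # Each consistent message heard suppresses our own transmission.
        self.counter += 1

    def interval_expired(self):
        """Return True if this node should have transmitted this interval,
        then double the interval (up to imax) and start the next one."""
        transmit = self.counter < self.k
        self.interval = min(self.interval * 2, self.imax)
        self._start_interval()
        return transmit
```

The exponential back-off is what makes Trickle-based SD energy-efficient in the steady state, while the reset on inconsistency keeps discovery time-efficient when services change.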
|
3 |
Lossy Filter Synthesis Nasirahmadi, Saman 23 September 2013 (has links)
All telecommunication systems, such as cellular mobile networks (cellphones), object-detection systems (radars), and navigation systems that include satellite positioning systems (GPS), base their operation on radio-wave transmission at pre-defined frequencies and thus require a microwave filter to select the most appropriate frequencies. Generally speaking, the more selective a filter is, the fewer unwanted frequencies and the less interference it picks up. Recent advances in microwave instruments, semiconductors, fabrication technologies and microwave filter applications have ushered in a new era in performance but have also brought significant challenges, such as keeping fabrication costs low, miniaturizing, and making low-profile devices. These challenges must be met while maintaining the performance of conventional devices. The thesis proposes the use of lossy filter concepts to maintain high-quality filtering, namely frequency-response flatness and selectivity, regardless of the filter's physical size. The method is applied to lumped-element filters. It introduces resistances into the physical structure of the filter and hence a certain amount of loss into the frequency response of the filter. The lossy filter synthesis is based on the coupling matrix model. The thesis also proposes modifications to traditional lossy filter design techniques to improve filter performance in the stopband.
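As a toy illustration of the loss mechanism described above (not the thesis's coupling-matrix synthesis itself), the sketch below computes the through response of a single series RLC branch between matched 50-ohm terminations: adding series resistance caps the peak transmission, trading absolute insertion loss for a response level that no longer hinges on achieving an extreme unloaded Q. All component values are made up for illustration.

```python
import numpy as np

def s21_series_rlc(f, r, l, c, z0=50.0):
    """|S21| of a series RLC branch between matched terminations z0."""
    w = 2 * np.pi * f
    z = r + 1j * w * l + 1 / (1j * w * c)
    return np.abs(2 * z0 / (2 * z0 + z))

f = np.linspace(0.5e9, 1.5e9, 2001)       # 0.5-1.5 GHz sweep
l = 8e-9                                  # 8 nH series inductance
c = 1 / ((2 * np.pi * 1e9) ** 2 * l)      # chosen to resonate at 1 GHz
lossless = s21_series_rlc(f, 0.0, l, c)   # ideal resonator: |S21| = 1 at f0
lossy = s21_series_rlc(f, 10.0, l, c)     # 10 ohm series loss resistance
```

At resonance the reactances cancel, so the lossless branch is fully transparent while the lossy one tops out at 2*z0/(2*z0 + r), about 0.91 here: a flat, predictable loss rather than a Q-limited rounding of the passband.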
|
6 |
Wave reflection from a lossy uniaxial media Azam, Md. Ali January 1995 (has links)
No description available.
|
7 |
Enabling Approximate Storage through Lossy Media Data Compression Worek, Brian David 08 February 2019 (has links)
Memory capacity, bandwidth, and energy all continue to present hurdles in the quest for efficient, high-speed computing. Recognition, mining, and synthesis (RMS) applications in particular are limited by the efficiency of the memory subsystem due to their large datasets and need to access memory frequently. RMS applications, such as those in machine learning, deliver intelligent analysis and decision making through their ability to learn, identify, and create complex data models. To meet the growing demand for RMS application deployment in battery-constrained devices, such as mobile and Internet-of-Things devices, designers will need novel techniques to improve system energy consumption and performance. Fortunately, many RMS applications demonstrate inherent error resilience, a property that allows them to produce acceptable outputs even when data used in computation contain errors. Approximate storage techniques across circuits, architectures, and algorithms exploit this property to improve the energy consumption and performance of the memory subsystem through quality-energy scaling. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution, which uses lossy compression to reduce the storage cost of media data. / MS / Computer memory systems present challenges in the quest for more powerful overall computing systems. Computer applications with the ability to learn from large sets of data in particular are limited because they need to access the memory system frequently. These applications are capable of intelligent analysis and decision making due to their ability to learn, identify, and create complex data models. To meet the growing demand for intelligent applications in smartphones and other Internet-connected devices, designers will need novel techniques to improve energy consumption and performance.
Fortunately, many intelligent applications are naturally resistant to errors, which means they can produce acceptable outputs even when there are errors in inputs or computation. Approximate storage techniques across computer hardware and software exploit this error resistance to improve the energy consumption and performance of computer memory by purposefully reducing data precision. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution, which uses lossy compression to reduce the storage cost of media data.
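The thesis's actual compressor is not reproduced here; as a minimal sketch of the quality-energy trade described above, the snippet below applies precision scaling (a simple lossy transform) to 8-bit media samples: the dropped low-order bits need not be stored, or can live in less reliable memory, at the cost of a reconstruction error bounded by the number of bits discarded.

```python
import numpy as np

def truncate_bits(samples, kept_bits):
    """Lossy precision scaling: zero the low-order bits of 8-bit samples.

    Keeping k of 8 bits bounds the reconstruction error at 2**(8-k) - 1.
    """
    shift = 8 - kept_bits
    return (samples >> shift) << shift

rng = np.random.default_rng(42)
original = rng.integers(0, 256, size=1024, dtype=np.uint8)
approx = truncate_bits(original, kept_bits=4)   # keep 4 of 8 bits
max_err = int(np.max(np.abs(original.astype(int) - approx.astype(int))))
```

For media data (audio samples, pixel values) this kind of bounded error often falls below perceptual thresholds, which is exactly the error resilience approximate storage exploits.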
|
8 |
Approximate Phrase Table Extraction from a Large Parallel Corpus (Přibližná extrakce frázové tabulky z velkého paralelního korpusu) Przywara, Česlav January 2013 (has links)
The aim of this work is to examine the applicability of an algorithm for approximate frequency counting to act as an on-the-fly filter in the process of phrase table extraction in Statistical Machine Translation systems. Its implementation allows the bulk of extracted phrase pairs to be much reduced with no significant loss to the ultimate quality of the phrase-based translation model, as measured by the state-of-the-art evaluation measure BLEU. The result of this implementation is a fully working program, called eppex, capable of acting as an alternative to the existing tools for phrase table creation and filtering that are part of the open-source SMT system Moses. A substantial part of this work is devoted to benchmarking both the runtime performance and the quality of the phrase tables produced by the program when confronted with parallel training data comprising 2 billion words.
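The abstract does not name the specific algorithm, but a standard approximate frequency counting scheme of the kind described is Manku and Motwani's lossy counting. A minimal Python sketch of how such a counter could filter a stream of extracted phrase pairs (the stream contents here are illustrative):

```python
import math

def lossy_count(stream, epsilon):
    """Manku-Motwani lossy counting.

    Returns approximate counts that undercount by at most epsilon * N,
    while guaranteeing that every item with true frequency >= epsilon * N
    survives the pruning.
    """
    width = math.ceil(1 / epsilon)      # bucket width
    counts, deltas = {}, {}             # delta = max possible undercount
    bucket = 1
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1
        if n % width == 0:              # bucket boundary: prune rare items
            doomed = [k for k in counts if counts[k] + deltas[k] <= bucket]
            for key in doomed:
                del counts[key], deltas[key]
            bucket += 1
    return counts

# Toy phrase-pair stream: two frequent pairs, two rare ones.
stream = ["a"] * 60 + ["b"] * 30 + ["c"] * 5 + ["d"] * 5
result = lossy_count(stream, epsilon=0.1)
```

Because memory is bounded by the number of frequent items rather than the vocabulary size, a filter like this can run during extraction itself, which is what makes on-the-fly phrase table filtering over billions of words feasible.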
|
9 |
Perceived audio quality of compressed audio in game dialogue Ahlberg, Anton January 2016 (has links)
A game can contain thousands of sound assets; to fit all of those files into a manageable storage space, they usually have to be compressed. One type of sound that often takes up a lot of disc space (because there is so much of it) is dialogue. In the popular game engine Unreal Engine 4 (UE4), audio is compressed to Ogg Vorbis with a default bit rate of 104 kbit/s. The goal of this paper is to see whether untrained listeners find dialogue compressed in Ogg Vorbis at 104 kbit/s good enough, or whether they prefer higher bit rates. A game was made in UE4 to act as a listening test. Dialogue audio was recorded with a male and a female voice actor and compressed in UE4 at six different bit rates. 24 untrained subjects were asked to play the game and identify the two out of six robots whose dialogue audio they thought sounded best. The results show that the subjects preferred the higher of the bit rates tested. The results were analysed with a chi-squared test, which showed that the null hypothesis can be rejected. Only 21% of the answers favoured UE4's default bit rate of 104 kbit/s or lower. The results suggest that the subjects prefer dialogue at higher bit rates and that UE4 should raise the default bit rate.
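The paper's raw vote counts are not given here, so the figures below are hypothetical; the sketch only shows the shape of the analysis: a chi-squared goodness-of-fit test of observed picks against the uniform distribution expected under the null hypothesis that listeners cannot tell the bit rates apart.

```python
def chi_squared_statistic(observed, expected):
    """Pearson's chi-squared goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical vote counts for six bit rates (lowest -> highest);
# 24 subjects x 2 picks each = 48 answers in total.
observed = [2, 3, 5, 8, 14, 16]
expected = [48 / 6] * len(observed)     # uniform under the null hypothesis
stat = chi_squared_statistic(observed, expected)

# Critical value for alpha = 0.05 with df = 6 - 1 = 5 (from standard tables).
CRITICAL_5PCT_DF5 = 11.07
reject_null = stat > CRITICAL_5PCT_DF5
```

With counts skewed toward the higher bit rates like these, the statistic far exceeds the critical value, mirroring the paper's conclusion that preferences were not uniform.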
|
10 |
Forward Error Correction for Packet Switched Networks Valverde Martínez, David, Parada Otte, Francisco Javier January 2008 (has links)
The main goal of this thesis is to select and test Forward Error Correction (FEC) schemes suitable for network video transmission over RTP/UDP. A general concern in communication networks is achieving a trade-off between reliable transmission and the delay it incurs. Our purpose is to look for techniques that improve reliability while the real-time delay constraints are fulfilled. To achieve this, the FEC techniques focus on recovering the packet losses that arise along any transmission. The FEC schemes we have selected are a Parity Check algorithm, Reed-Solomon (RS) codes and a Convolutional code. Simulations are performed to test the different schemes. The results obtained show that the RS codes are the most powerful schemes in terms of recovery capability; however, they cannot be deployed for every configuration since they exceed the delay threshold. On the other hand, although the Parity Check codes are the least efficient in terms of error recovery, they show a reasonably low delay. Therefore, depending on the packet loss probability we are working with, we may choose one or another of the schemes. To summarise, this thesis includes a theoretical background, a thorough analysis of the chosen FEC schemes, simulation results, conclusions and proposed future work.
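The thesis's simulation code is not shown; as a minimal sketch of the simplest scheme it compares, the snippet below implements single-loss recovery with an XOR parity packet: one parity packet per group lets the receiver rebuild any one lost packet, which is why parity schemes are cheap and low-delay but weaker than Reed-Solomon against multiple losses within a group.

```python
def xor_parity(packets):
    """Build a parity packet as the bytewise XOR of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Recover the single missing packet (the None entry) from the parity.

    XORing the parity with every surviving packet cancels them out,
    leaving exactly the bytes of the lost packet.
    """
    missing = received.index(None)
    rebuilt = bytearray(parity)
    for j, p in enumerate(received):
        if j != missing:
            for i, byte in enumerate(p):
                rebuilt[i] ^= byte
    return missing, bytes(rebuilt)

group = [b"pkt0", b"pkt1", b"pkt2"]       # illustrative payloads
parity = xor_parity(group)                # sent alongside the group
idx, rebuilt = recover([b"pkt0", None, b"pkt2"], parity)
```

The delay cost is bounded by the group size: the receiver can start recovery as soon as the group and its parity arrive, without the larger encoding blocks an RS code needs.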
|