71 |
A Comparative Study of Monitoring Data Center Temperature Through Visualizations in Virtual Reality Versus 2D Screen / En jämförande studie av datacentrets temperaturövervakning genom visualiseringar i virtuell verklighet och på 2D skärm
Nevalainen, Susanna January 2018 (has links)
Due to the constantly increasing amount of data, the need for more efficient data center management solutions has increased dramatically. Approximately 40% of the costs for data centers are associated with cooling, making temperature management vital for data center profitability, performance, and sustainability. Current data center hardware management software lacks a visual and contextual approach to data center monitoring, overlooking the hierarchical and spatiotemporal data structures of data center data in its design. This study compared two potential data center temperature visualizations — a 3D visualization in virtual reality (VR) and a 2D visualization on a 2D screen — in terms of time to task completion, accuracy rate, and user satisfaction. Results of a within-subject user study with 13 data center specialists indicated that users perceived three-dimensional data center racks and devices more efficiently in VR than in the 2D visualization, whereas a two-dimensional graph was interpreted more efficiently and accurately on the 2D screen. The user satisfaction of both implemented visualizations scored over 80 in a System Usability Scale (SUS) survey, showing that the implemented visualizations have significant potential to improve data center temperature management. / På grund av den ständigt ökande mängden av data, har behovet till effektivare datacenterhanteringslösningar ökat dramatiskt. Cirka 40% av kostnaderna för datacentrar används till kylning, vilket gör temperaturhanteringen till en kritisk del av datacentrets lönsamhet, prestanda och hållbarhet. Nuvarande datacenterhanteringsprogramvaror saknar visuella och kontextuella tillvägagångssätt för datacenterövervakning och förbiser de hierarkiska och spatiotemporala datastrukturerna för datacenterdata i programvarudesign. Denna studie jämförde två potentiella datacentertemperaturvisualiseringar — en tredimensionell visualisering i virtuell verklighet (VV) och en tvådimensionell visualisering på en 2D skärm — i jämförelsen beaktas tid till uppgiftens slutförande, antalet riktiga svar och tillfredsställelse av användaren. Resultatet av användarstudien med 13 datacenterspecialister antydde att användare uppfattar tredimensionella elektronikrack och enheter snabbare i VV än med 2D-visualisering, medan en tvådimensionell graf tolkas snabbare och noggrannare på en 2D skärm. Användartillfredsställelse av båda visualiseringarna fick över 80 poäng i SUS mätningen, vilket antyder att de genomförda visualiseringarna har en stor potential för att förbättra datacentertemperaturhanteringen.
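The SUS figure cited above comes from the standard System Usability Scale questionnaire of ten 5-point Likert items. The thesis's own questionnaire data are not reproduced here; the sketch below only illustrates the standard SUS scoring formula, with made-up responses.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten Likert
    responses, each in the range 1-5. Odd-numbered items contribute
    (response - 1), even-numbered items contribute (5 - response);
    the sum of contributions is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical participant: a score above 80 is commonly read as very good usability.
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 2]))  # 87.5
```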
|
72 |
Diversifying The Internet
Liao, Yong 01 May 2010 (has links)
Diversity is a widely existing and much desired property in many networking systems. This dissertation studies diversity problems in the Internet, the largest computer networking system in the world. The motivations for diversifying the Internet are two-fold. First, diversifying the Internet improves the robustness and reliability of Internet routing. Most problems we encounter in our daily use of the Internet, such as service interruptions and service quality degradation, are rooted in the inter-domain routing system. Inter-domain routing is policy-based routing, where policies are often based on commercial agreements between ASes. Although people know how to safely accommodate a few commercial agreements in inter-domain routing, for a large set of diverse commercial agreements it is not yet clear what policy guidelines can accommodate them and guarantee convergence. Accommodating diverse commercial agreements is not only needed for ASes to achieve their business goals; it also provides more path diversity in inter-domain routing, which potentially benefits the routing system. However, more reliable and robust routing cannot be achieved unless the routing system exploits that path diversity well, which is not the case for the current inter-domain routing system. Many paths exist in the underlying network, but the routing system cannot find those paths promptly. Although many schemes have been proposed to address the routing reliability problem, they often add significantly more complexity to the system. The need for a more reliable inter-domain routing system without much added complexity calls for practical schemes that better exploit Internet path diversity and provide more reliable routing service. The increasing demand for value-added services in the Internet also motivates the research in this dissertation. Recently, network virtualization substrates and data centers have become important infrastructures. Network virtualization provides the ability to run multiple concurrent virtual networks on the same shared substrate. To better facilitate building application-specific networks, so as to test and deploy network innovations for the future Internet, a network virtualization platform must provide both a high degree of flexibility and high-speed packet forwarding in virtual networks. However, flexibility and forwarding performance are often tightly coupled issues in system design: usually one must be sacrificed to improve the other. The lack of a platform that offers both flexibility and good forwarding performance motivates the research in this dissertation on designing network virtualization platforms that better support virtual networks with diverse functionalities in the future Internet. The popularity of data centers in the Internet also motivates this dissertation to study scalable and cost-efficient data center networks. Data centers with clusters of servers are already commonplace in the Internet for hosting large-scale networking applications, which require huge amounts of computation and storage resources. To keep up with the performance requirements of those applications, a data center has to accommodate a large number of servers. As the Internet evolves and more diverse applications emerge, the computation and storage requirements of data centers grow quickly. However, with conventional interconnection structures it is hard to scale the number of servers in data centers.
Hence, it is important to design new interconnection structures for future data centers in the Internet. Four topics are explored in this dissertation: (i) accommodating diverse commercial agreements in inter-domain routing, (ii) exploiting the Internet's AS-level path diversity, (iii) supporting diverse network data planes, and (iv) diverse interconnection networks for data centers. The first part of this dissertation explores accommodating diverse commercial agreements in inter-domain routing while guaranteeing global routing convergence, so as to provide more path diversity in the Internet. The second part studies exploiting the path diversity of the Internet by running multiple routing processes in parallel; these processes compute multiple paths that can complement each other when one path has problems due to dynamics in the routing system. The third part studies supporting concurrent networks with heterogeneous data-plane functions via network virtualization. Two virtual network platforms are presented, which achieve both high-speed packet forwarding in each virtual network and a high degree of flexibility for each virtual network to customize its data-plane functions. The last part of this dissertation presents a new scalable interconnection structure for data center networks. The salient feature of this new interconnection structure is that it expands to any number of servers without requiring physical upgrades to the existing servers.
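The dissertation's parallel routing processes are not specified in this abstract; as a toy illustration of how complementary paths can be precomputed so that one can back up another, the sketch below repeatedly finds a shortest path and removes its edges (hypothetical, undirected topology; a greedy approach, not the dissertation's algorithm).

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest path; graph is {node: set(neighbors)}. Returns a node list or None."""
    parent, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None

def edge_disjoint_paths(graph, src, dst, k=2):
    """Greedily collect up to k edge-disjoint paths by deleting used edges."""
    graph = {n: set(adj) for n, adj in graph.items()}  # work on a copy
    paths = []
    for _ in range(k):
        path = shortest_path(graph, src, dst)
        if path is None:
            break
        paths.append(path)
        for a, b in zip(path, path[1:]):  # remove used edges (both directions)
            graph[a].discard(b)
            graph[b].discard(a)
    return paths

# Hypothetical AS-level topology with two complementary routes from A to D.
topo = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
print(edge_disjoint_paths(topo, "A", "D"))  # two edge-disjoint A-to-D paths
```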
|
73 |
Cooperative caching for object storage
Kaynar Terzioglu, Emine Ugur 29 October 2022 (has links)
Data is increasingly stored in data lakes, vast immutable object stores that can be accessed from anywhere in the data center. By providing low-cost and scalable storage, today's immutable, object-storage-based data lakes are used by a wide range of applications with diverse access patterns. Unfortunately, performance can suffer for applications that do not match the access patterns for which the data lake was designed. Moreover, in many of today's (non-hyperscale) data centers, limited bisection bandwidth will limit data lake performance. Today many computer clusters integrate caches both to address the mismatch between application performance requirements and the capabilities of the shared data lake, and to reduce the demand on the data center network. However, per-cluster caching:
i) means the expensive cache resources cannot be shifted between clusters based on demand,
ii) makes sharing expensive because data accessed by multiple clusters is independently cached by each of them,
and iii) makes it difficult for clusters to grow and shrink if their servers are being used to cache storage.
In this dissertation, we present two novel data-center-wide cooperative cache architectures, Datacenter-Data-Delivery Network (D3N) and Directory-Based Datacenter-Data-Delivery Network (D4N), that are designed to be part of the data lake itself rather than part of the computer clusters that use it. D3N and D4N distribute caches across the data center to enable data sharing and elasticity of cache resources, where requests are transparently directed to nearby cache nodes. They dynamically adapt to changes in access patterns and accelerate workloads while providing the same consistency, trust, availability, and resilience guarantees as the underlying data lake. We find that exploiting the immutability of object stores significantly reduces the complexity and provides opportunities for cache management strategies that were not feasible for previous cooperative cache systems for file- or block-based storage.
D3N is a multi-layer cooperative cache that targets workloads with large read-only datasets, such as big data analytics. It is designed to be easily integrated into existing data lakes, with only limited support for write caching of intermediate data, and it avoids any global state by, for example, using consistent hashing for locating blocks and making all caching decisions based purely on local information. Our prototype is performant enough to fully exploit the (5 GB/s read) SSDs and (40 Gbit/s) NICs in our system and improves the runtime of realistic workloads by up to 3x. The simplicity of D3N has enabled us, in collaboration with industry partners, to upstream the two-layer version of D3N into the existing code base of the Ceph object store as a new experimental feature, making it available to the many data lakes around the world based on Ceph.
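The abstract mentions consistent hashing for locating blocks without global state; D3N's actual implementation lives in Ceph and is not shown here. The sketch below is only a generic consistent-hash ring (hypothetical node names), illustrating how any host can map a block name to a cache node using purely local information.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: each cache node gets several virtual
    points on the ring; a block is served by the first node clockwise
    from the block's hash."""

    def __init__(self, nodes, vnodes=100):
        self._ring = sorted((self._hash(f"{n}#{v}"), n)
                            for n in nodes for v in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, block):
        idx = bisect.bisect_right(self._keys, self._hash(block)) % len(self._ring)
        return self._ring[idx][1]

# Hypothetical cache nodes; every client computes the same mapping locally.
ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("bucket1/object42/block-0007"))
```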
D4N is a directory-based cooperative cache that provides a reliable write tier and a distributed directory that maintains global state. It explores the use of global state to implement more sophisticated cache management policies and enables application-specific tuning of caching policies to support a wider range of applications than D3N. In contrast to previous cache systems that implement their own mechanisms for maintaining dirty data redundantly, D4N re-uses the existing data lake (Ceph) software to implement a write tier and exploits the semantics of immutable objects to move aged objects to the shared data lake. This design greatly reduces the barrier to adoption and enables D4N to take advantage of sophisticated data lake features such as erasure coding. We demonstrate that D4N is performant enough to saturate the bandwidth of the SSDs, and that it automatically adapts replication to the demands of the working set and outperforms the state-of-the-art cluster cache Alluxio. While it will be substantially more complicated to integrate the D4N prototype into production-quality code that can be adopted by the community, these results are compelling enough that our partners are starting that effort.
D3N and D4N demonstrate that cooperative caching techniques, originally designed for file systems, can be employed to integrate caching into today's immutable, object-based data lakes. We find that the properties of immutable object storage greatly simplify the adoption of these techniques and allow caching to be integrated in a fashion that re-uses existing battle-tested software, greatly reducing the barrier to adoption. By integrating caching in the data lake rather than the compute cluster, this research opens the door to efficient data-center-wide sharing of data and resources.
|
74 |
OneSwitch Data Center Architecture
Sehery, Wile Ali 13 April 2018 (has links)
In the last two decades, data center networks have evolved to become a key element in improving levels of productivity and competitiveness for different types of organizations. Traditionally, data center networks have been constructed with three layers of switches: Edge, Aggregation, and Core. Although this Three-Tier architecture has worked well in the past, it poses a number of challenges for current and future data centers.
Data centers today have evolved to support dynamic resources such as virtual machines and storage volumes from any physical location within the data center. This has led to highly volatile and unpredictable traffic patterns. Also, the emergence of "Big Data" applications that exchange large volumes of information has created large, persistent flows that need to coexist with other traffic flows. The Three-Tier architecture and current routing schemes are no longer sufficient for achieving high bandwidth utilization.
Data center networks should be built in a way that adequately supports virtualization and cloud computing technologies. They should provide services such as simplified provisioning, workload mobility, dynamic routing and load balancing, and equidistant bandwidth and latency. As data center networks have evolved, the Three-Tier architecture has proven to be a challenge not only in terms of complexity and cost, but also because it falls short of supporting many new data center applications.
In this work we propose OneSwitch: a switch architecture for the data center. OneSwitch is backward compatible with current Ethernet standards and uses an OpenFlow central controller, a Location Database, a DHCP Server, and a Routing Service to build an Ethernet fabric that appears as one switch to end devices. This allows the data center to use switches in scale-out topologies to support hosts in a plug-and-play manner, as well as provide much-needed services such as dynamic load balancing, intelligent routing, seamless mobility, and equidistant bandwidth and latency. / PHD / In the last two decades, data center networks have evolved to become a key element in improving levels of productivity and competitiveness for different types of organizations. Traditionally, data center networks have been constructed with three layers of switches. This Three-Tier architecture has proven to be a challenge not only in terms of complexity and cost, but also because it falls short of supporting many new data center applications.
In this work we propose OneSwitch: a switch architecture for the data center. OneSwitch supports virtualization and cloud computing technologies by providing services such as simplified provisioning, workload mobility, dynamic routing and load balancing, and equidistant bandwidth and latency.
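The abstract names an OpenFlow central controller, a Location Database, a DHCP Server, and a Routing Service but does not give their interfaces. The sketch below is only a hypothetical illustration (invented names, no real OpenFlow library) of the kind of location database a controller could consult so that the fabric appears as one switch: hosts are tracked by MAC address, and a lookup tells the controller which edge switch and port a flow should be sent toward.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Location:
    switch_id: str   # edge switch the host is attached to
    port: int        # port on that switch

class LocationDatabase:
    """Tracks where each host MAC currently lives in the fabric."""

    def __init__(self):
        self._by_mac = {}

    def learn(self, mac, switch_id, port):
        # Called when a host is first seen, or again after it migrates (e.g., VM mobility).
        self._by_mac[mac] = Location(switch_id, port)

    def lookup(self, mac):
        return self._by_mac.get(mac)

# A controller's packet-in handler could resolve the destination and install a path.
db = LocationDatabase()
db.learn("00:11:22:33:44:55", switch_id="edge-3", port=12)
dst = db.lookup("00:11:22:33:44:55")
if dst is not None:
    print(f"forward toward {dst.switch_id} port {dst.port}")
```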
|
75 |
Virtual power: um modelo de custo baseado no consumo de energia do processador por máquina virtual em nuvens IaaS / Virtual power: a cost model based on the processor energy consumption per virtual machine in IaaS clouds
Hinz, Mauro 29 September 2015 (has links)
The outsourcing of computing services has undergone constant evolution in recent years, due to the increasing demand for computing resources. Accordingly, data centers are the main suppliers of computing services, and cloud-based computing services provide a new paradigm for the offer and consumption of these computing resources. A substantial motivator for using cloud computing is its pricing model, which makes it possible to charge the customer only for the resources actually used, adopting a pay-as-you-use cost model. Among cloud-based computing services, the service type Infrastructure-as-a-Service (IaaS) is the one most used by companies that would like to outsource their computing infrastructure. The IaaS service, in most cases, is offered through virtual machines. This work revisits the cost models used by data centers and analyzes the costs of supplying virtual machines in IaaS clouds. This analysis shows that electricity represents a considerable portion of this cost, that much of the consumption comes from the use of processors by virtual machines, and that this aspect is not considered in the identified cost models. This work describes the Virtual Power Model, a cost model based on the energy consumption of the processor per virtual machine in IaaS clouds. The model is built on assumptions about energy consumption versus processing load, among others, which are validated through experiments in a test environment in a small data center. As a result, the Virtual Power Model proves to be a fairer pricing model for the consumed resources than the identified models. Finally, a case study compares the costs charged to a client using Amazon's cost model for the AWS EC2 service with the same service charged using the Virtual Power Model. / A terceirização dos serviços de computação tem passado por evoluções constantes nos últimos anos em função do contínuo aumento na demanda por recursos computacionais. Neste sentido, os data centers são os principais fornecedores de serviço de computação e os serviços de computação em nuvem proporcionam um novo paradigma na oferta e consumo desses recursos computacionais. Um considerável motivador do uso das nuvens computacionais é o seu modelo de tarifação que possibilita a cobrança do cliente somente dos recursos que ele utilizou, adotando um modelo de custo do tipo pay-as-you-use. Dentre os serviços de computação em nuvem, o serviço do tipo IaaS (Infrastructure-as-a-Service) é um dos mais utilizados por empresas que desejam terceirizar a sua infraestrutura computacional. O serviço de IaaS, na grande maioria dos casos, é ofertado através de instâncias de máquinas virtuais. O presente trabalho revisita os modelos de custos empregados em data centers analisando a formação dos custos no fornecimento de máquinas virtuais em nuvens baseadas em IaaS. Com base nesta análise identifica-se que a energia elétrica possui uma parcela considerável deste custo e que boa parte deste consumo é proveniente do uso de processadores pelas máquinas virtuais, e que esse aspecto não é considerado nos modelos de custos identificados. Este trabalho descreve o Modelo Virtual Power, um modelo de custo baseado no consumo de energia do processador por máquina virtual em nuvens IaaS. A constituição do modelo está baseada nas premissas de consumo de energia vs. carga de processamento, entre outros, que são comprovadas através de experimentação em um ambiente de testes em um data center de pequeno porte. Como resultado o Modelo Virtual Power mostra-se mais justo na precificação dos recursos consumidos do que os modelos identificados. Por fim, é realizado um estudo de caso comparando os custos tarifados a um cliente empregando o modelo de custo da Amazon no serviço AWS EC2 e o mesmo serviço tarifado utilizando o Modelo Virtual Power.
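The abstract relates per-VM cost to processor energy consumption but does not reproduce the model's equations. The sketch below is only an illustration of the commonly used linear server power model (power grows roughly linearly with CPU utilization between idle and peak), applied per VM with entirely hypothetical numbers; it is not the Virtual Power Model itself.

```python
def host_power_watts(cpu_util, p_idle=100.0, p_peak=250.0):
    """Linear power model: idle power plus a utilization-proportional share
    of the dynamic range. cpu_util is the host CPU utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * cpu_util

def vm_energy_cost(vm_util_samples, hours_per_sample, vcpu_share,
                   price_per_kwh=0.12, p_idle=100.0, p_peak=250.0):
    """Charge a VM for its share of the host's idle power plus the dynamic
    power caused by its own CPU usage. vm_util_samples are the VM's CPU
    utilization readings (0-1) of its allotted share of the host."""
    kwh = 0.0
    for u in vm_util_samples:
        watts = vcpu_share * p_idle + (p_peak - p_idle) * vcpu_share * u
        kwh += watts * hours_per_sample / 1000.0
    return kwh * price_per_kwh

# Hypothetical VM holding 25% of a host, sampled hourly over four hours.
print(round(vm_energy_cost([0.1, 0.8, 0.6, 0.2], 1.0, 0.25), 4))
```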
|
76 |
EXPLOITING THE SPATIAL DIMENSION OF BIG DATA JOBS FOR EFFICIENT CLUSTER JOB SCHEDULING
Akshay Jajoo (9530630) 16 December 2020 (has links)
With the growing business impact of distributed big data analytics jobs, it has become crucial to optimize their execution and resource consumption. In most cases, such jobs consist of multiple sub-entities called tasks and are executed online in a large, shared distributed computing system. The ability to accurately estimate runtime properties and coordinate the execution of a job's sub-entities allows a scheduler to schedule jobs efficiently. This thesis presents the first study that highlights the spatial dimension, an inherent property of distributed jobs, and underscores its importance in efficient cluster job scheduling. We develop two new classes of spatial-dimension-based algorithms to address the two primary challenges of cluster scheduling. First, we propose, validate, and design two complete systems that employ learning algorithms exploiting the spatial dimension. We demonstrate high similarity in runtime properties between sub-entities of the same job by detailed trace analysis of four different industrial cluster traces. We identify design challenges and propose principles for a sampling-based learning system for two examples: first a coflow scheduler, and second a cluster job scheduler. We also propose, design, and demonstrate the effectiveness of new multi-task scheduling algorithms based on effective synchronization across the spatial dimension. We underline, and validate by experimental analysis, the importance of synchronization between sub-entities (flows, tasks) of a distributed entity (coflow, data analytics job) for its efficient execution. We also highlight that failing to consider sibling sub-entities when scheduling one of them can lead to sub-optimal overall cluster performance. We propose, design, and implement a full coflow scheduler based on these assertions.
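The thesis's systems are not described here in enough detail to reproduce; the sketch below is only a toy illustration of the underlying idea: because sibling tasks of the same job tend to have similar runtimes, a scheduler can run a small sample of a job's tasks, use their mean runtime to estimate the rest, and then favor jobs with the least estimated remaining work (all names and numbers are hypothetical).

```python
from statistics import mean

def estimate_remaining_work(jobs, sample_size=2):
    """jobs maps job_id -> (observed runtimes of its sampled tasks, count of
    tasks still waiting). Returns job ids ordered by estimated remaining
    work (sampled mean task runtime * pending task count)."""
    estimates = {}
    for job_id, (sampled_runtimes, pending_tasks) in jobs.items():
        sample = sampled_runtimes[:sample_size]
        est_task_runtime = mean(sample) if sample else float("inf")
        estimates[job_id] = est_task_runtime * pending_tasks
    return sorted(estimates, key=estimates.get)

# Hypothetical cluster state: (runtimes of sampled tasks in seconds, pending task count).
jobs = {
    "job-A": ([12.0, 14.0], 50),   # ~13 s per task, lots of work left
    "job-B": ([3.0, 4.0], 20),     # short tasks, little work left
    "job-C": ([30.0], 5),
}
print(estimate_remaining_work(jobs))  # schedule the job with the least estimated work first
```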
|
77 |
Green Computing – Power Efficient Management in Data Centers Using Resource Utilization as a Proxy for Power
Da Silva, Ralston A. January 2009 (has links)
No description available.
|
78 |
[en] ANOMALY DETECTION IN DATA CENTER MACHINE MONITORING METRICS / [pt] DETECÇÃO DE ANOMALIAS NAS MÉTRICAS DAS MONITORAÇÕES DE MÁQUINAS DE UM DATA CENTER
RICARDO SOUZA DIAS 17 January 2020 (has links)
[pt] Um data center normalmente possui grande quantidade de máquinas com diferentes configurações de hardware. Múltiplas aplicações são executadas e software e hardware são constantemente atualizados. Para evitar a interrupção de aplicações críticas, que podem causar grandes prejuízos financeiros, os administradores de sistemas devem identificar e corrigir as falhas o mais cedo possível. No entanto, a identificação de falhas em data centers de produção muitas vezes ocorre apenas quando as aplicações e serviços já estão indisponíveis. Entre as diferentes causas da detecção tardia de falhas está o uso de técnicas de monitoração baseadas apenas em thresholds. O aumento crescente na complexidade de aplicações que são constantemente atualizadas torna difícil a configuração de thresholds ótimos para cada métrica e servidor. Este trabalho propõe o uso de técnicas de detecção de anomalias no lugar de técnicas baseadas em thresholds. Uma anomalia é um comportamento do sistema que é incomum e significativamente diferente do comportamento normal anterior. Desenvolvemos um algoritmo para detecção de anomalias, chamado DASRS (Decreased Anomaly Score by Repeated Sequence), que analisa em tempo real as métricas coletadas de servidores de um data center de produção. O DASRS apresentou excelentes resultados de acurácia, compatíveis com os algoritmos do estado da arte, além de tempo de processamento e consumo de memória menores. Por esse motivo, o DASRS atende aos requisitos de processamento em tempo real de um grande volume de dados. / [en] A data center typically has a large number of machines with different hardware configurations. Multiple applications are executed, and software and hardware are constantly updated. To avoid the disruption of critical applications, which can cause significant financial loss, system administrators should identify and correct failures as early as possible. However, fault detection in production data centers often occurs only when applications and services are already unavailable. Among the different causes of late fault detection is the use of threshold-only monitoring techniques. The increasing complexity of constantly updated applications makes it difficult to set optimal thresholds for each metric and server. This work proposes the use of anomaly detection techniques in place of threshold-based techniques. An anomaly is system behavior that is unusual and significantly different from the previous normal behavior. We have developed an anomaly detection algorithm called Decreased Anomaly Score by Repeated Sequence (DASRS) that analyzes, in real time, metrics collected from servers in a production data center. DASRS showed excellent accuracy results, comparable to state-of-the-art algorithms, with lower processing time and memory consumption. For this reason, DASRS meets the real-time processing requirements of a large volume of data.
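The abstract does not describe DASRS's internals, so the sketch below is not the DASRS algorithm; it only illustrates, with a generic rolling z-score, the contrast drawn above between a fixed threshold and an anomaly score that adapts to each server's recent normal behavior (hypothetical metric stream and parameters).

```python
from collections import deque
from statistics import mean, pstdev

class RollingAnomalyDetector:
    """Flags a metric sample as anomalous when it deviates too far from the
    recent window's mean, instead of comparing it to one fixed threshold."""

    def __init__(self, window=60, z_limit=3.0):
        self.window = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 10:          # wait for some history first
            mu, sigma = mean(self.window), pstdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.z_limit:
                anomalous = True
        self.window.append(value)
        return anomalous

# Hypothetical CPU-usage stream: steady around 40%, then a sudden spike.
detector = RollingAnomalyDetector(window=30, z_limit=3.0)
stream = [40.0, 41.5, 39.0, 40.5, 42.0, 38.5, 40.0, 41.0, 39.5, 40.5, 40.0, 95.0]
print([detector.observe(v) for v in stream])  # only the spike is flagged
```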
|
79 |
Modelling of organic data centers / Modellering av ekologiska datorhallar
Sandström, Mimmi January 2020 (has links)
I det här examensarbetet undersöks möjligheten att återvinna termisk energi genom att driva en så kallad ekologisk datorhall. Denna uppstår genom en integration mellan en storskalig högpresterande datorhall och ett växthus. Den termiska energin, eller spillvärme som den också kallas, genereras i stora mängder som en biprodukt av kylning av datahallar världen över. Avsikten är att använda spillvärme som genereras för att täcka det årliga energibehovet av ett växthus. Syftet med den ekologiska datahallen är att maximera vinsten baserat på alla tre hållbarhetspelarna; den ekonomiska, den miljömässiga och den sociala. Dessutom är avsikten att minska den stora elektriska energiförbrukningen för datahallen genom att applicera strategier för "fri kyla". Examensarbetets mål är att undersöka den tekniska genomförbarheten för en typiskt ekologisk datahall, lokaliserad på tre olika platser i Sverige; Luleå, Stockholm och Lund. Målet är även att ta reda på effekterna av datahallens och växthusets symbios. Forskningsproblemen som ska besvaras är för det första, vad den optimala storleken för en ekologisk datahall är för att maximalt utnyttja den genererade spillvärmen. Där datahallen är lokaliserad på ovan nämnda platser. För det andra, var i Sverige en ekologisk datahall skulle placeras för maximal vinst. Slutligen undersöks vilka kapital- och driftskostnader som relateras till en typisk ekologisk datahall samt vad intäkterna och den sociala avkastningen är på investeringen. För att finna lösningen på forskningsproblemen så modelleras de tekniska och ekonomiska förutsättningarna för en ekologisk datahall med hjälp av programvaran Microsoft Excel. Verksamheten analyseras även ur ett hållbarhetsperspektiv och marknaden för liknande projekt undersöks. Från examensarbetet framgår det att alla undersökta platser i Sverige är lämpliga för implementering av fri kyla. Den optimala placeringen av en typisk ekologisk datahall skulle dock vara i Luleå. Detta är baserat på fler bidragande faktorer, inklusive lågt pris på el och mark samt hög tillgång till naturresurser. Dessutom är det inte lika många konkurrenter med liknande affärsidéer på den lokala marknaden jämfört med exempelvis Stockholm, därav minskar rivaliteten att vara det största lokala bolaget. Slutligen bör man överväga att i framtiden arbeta med att variera växthusets tekniska- och jordbruksaspekter för att perfekt motsvara datahallens specifikationer på den aktuella platsen. / In the master thesis, the opportunity of recovering thermal energy by operating an organic data center is investigated. This thermal energy, or waste heat as it is called, is generated as a byproduct of the cooling of large-scale high-performance computing centers. The intent is to use this waste heat to cover the energy demand of a greenhouse. The purpose of the organic data center is to integrate a large data center with a greenhouse to maximise the profit on all three pillars of sustainability: the financial, the environmental and the social pillar. Moreover, the massive power consumption of large data centers will be reduced by the implementation of free cooling. The thesis aims at examining the technical feasibility of a typical organic data center placed at three locations in Sweden: Luleå, Stockholm and Lund. Further, it aims to find out what the effects of the data center and greenhouse symbiosis are.
The research problems to be answered are, firstly, what the optimal dimension of an organic data center is for maximum waste heat utilisation when placed at the locations mentioned. Secondly, where the organic data center would ideally be placed in Sweden for maximum profit. Lastly, what the capital and operational expenses of the organic data center are, as well as the revenue and social return on investment. The research problems are addressed by modelling the technical and financial conditions of the organic data center using the software Microsoft Excel, as well as by analysing the business from a sustainability perspective. The market for similar projects is also investigated. From the thesis work, it is found that all locations are suitable for the implementation of free cooling. However, the optimal localization of a typical organic data center would be in Luleå, based on several contributing factors, including the low price of electricity and land and the high access to natural resources. Moreover, there are not as many competitors with the same business idea on the local market as in, for instance, Stockholm, which reduces the rivalry to become the biggest local business. Finally, varying the technical and agricultural aspects of the greenhouse to perfectly match the data center at the current location should be considered in future work.
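The thesis models the symbiosis in Microsoft Excel; the sketch below is only a rough Python illustration of the kind of monthly energy balance involved (recoverable data-center waste heat versus greenhouse heating demand), with entirely hypothetical loads and no claim to match the thesis's figures for Luleå, Stockholm or Lund.

```python
def waste_heat_kwh(it_load_kw, hours, recoverable_fraction=0.85):
    """Nearly all electricity drawn by IT equipment leaves as heat; only a
    fraction of it can realistically be captured and delivered to the greenhouse."""
    return it_load_kw * hours * recoverable_fraction

def monthly_coverage(it_load_kw, greenhouse_demand_kwh):
    """Fraction of each month's greenhouse heating demand that the data
    center's recoverable waste heat can cover (capped at 100%)."""
    hours_per_month = 730
    supply = waste_heat_kwh(it_load_kw, hours_per_month)
    return {month: min(1.0, supply / demand)
            for month, demand in greenhouse_demand_kwh.items()}

# Hypothetical 500 kW IT load and a northern-climate heating profile (kWh per month).
demand = {"Jan": 420_000, "Apr": 180_000, "Jul": 40_000, "Oct": 220_000}
for month, cov in monthly_coverage(500, demand).items():
    print(f"{month}: {cov:.0%} of greenhouse heat demand covered")
```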
|
80 |
The SoftDecom Engine
Benitez, Jesus; Guadiana, Juan; Torres, Miguel; Creel, Larry 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / The software decommutator was recently fielded at White Sands to address the requirements of a new missile test program. This software decommutator is rewritten as a simple C program Function or Class with a simple interface. The function and an Interface Control Definition (ICD) comprise the SoftDecom Engine (SDE). This paper addresses how an SDE can deliver Enterprise Wide Portability, not only that of the SDE, but more importantly a test program's Verification & Validation (V&V).

The crux of the portability issue is reduced to defining the interface of the SDE. In the simplest manifestation only two interfaces are needed, and one is a given. The input structure is defined by the telemeter minor frame, with time appended if desired. The output structure is no more than an array containing the parameters required. The ICD could be generalized into a standard for most applications, but that isn't necessary, as the structures are simple and hence easy to adapt to anyway.

This new paradigm's importance will flourish with the industry's irreversible migration to faster and more complex telemeters. The paper reviews the relative ease with which software addresses very complex telemeters. With confidence it may be said "if the telemeter format can be described in writing, it can be processed in real time". Also discussed are tasks that normally require specialized or customized and expensive equipment, for example merged streams, complex simulations, and recording and reproducing PCM (sans recorder). Hopefully, your creativity will be engaged as ours has been.
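The interface sketched above (a minor frame in, an array of required parameters out) can be illustrated with a minimal example. The paper describes a C function; the sketch below uses Python instead, with hypothetical word positions and 16-bit big-endian words, purely to show the shape of the SDE's input and output structures.

```python
import struct

def soft_decom(minor_frame, parameter_map):
    """Extract the requested parameters from one telemeter minor frame.

    minor_frame   -- raw bytes of the frame, as 16-bit big-endian words
                     (time words, if appended upstream, are simply more words)
    parameter_map -- list of (name, word_index) pairs naming the words to pull
    Returns a dict mapping parameter name to integer value.
    """
    words = struct.unpack(f">{len(minor_frame) // 2}H", minor_frame)
    return {name: words[index] for name, index in parameter_map}

# Hypothetical 8-word minor frame and a small ICD-style parameter list.
frame = struct.pack(">8H", 0xEB90, 100, 200, 300, 400, 500, 600, 7)
icd = [("frame_sync", 0), ("pressure", 2), ("temperature", 5)]
print(soft_decom(frame, icd))  # {'frame_sync': 60304, 'pressure': 200, 'temperature': 500}
```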
|