41

Transmission of digital images using data-flow architecture

Haddad, Nicholas January 1985 (has links)
No description available.
42

Scalable and Declarative Information Extraction in a Parallel Data Analytics System

Rheinländer, Astrid 06 July 2017 (has links)
Information extraction (IE) on very large data sets requires highly complex, scalable, and adaptive systems. Although numerous IE algorithms exist, their seamless and extensible combination in a scalable system is still a major challenge. This work presents a query-based IE system for a parallel data analysis platform, which is configurable for specific application domains and scales to terabyte-sized text collections. First, configurable operators are defined for basic IE and Web Analytics tasks, which can be used to express complex IE tasks in the form of declarative queries. All operators are characterized in terms of their properties to highlight the potential and importance of optimizing non-relational, user-defined operators (UDFs) for data flows. Subsequently, we survey the state of the art in optimizing non-relational data flows and highlight that a comprehensive optimization of UDFs is still a challenge. Based on this observation, an extensible, logical optimizer (SOFA) is introduced, which incorporates the semantics of UDFs into the optimization process. SOFA analyzes a compact set of operator properties and combines automated analysis with manual UDF annotations to enable a comprehensive optimization of data flows. SOFA is able to logically optimize arbitrary data flows from different application areas, resulting in significant runtime improvements compared to other techniques. Finally, the applicability of the presented system to terabyte-sized corpora is investigated: we systematically evaluate the scalability and robustness of the employed methods and tools in order to pinpoint the most critical challenges in building an IE system for very large data sets.
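
As a purely illustrative sketch (the names `Operator`, `filter_op`, and `optimize` are hypothetical, not this system's actual API), the following Python fragment shows the core idea: declarative operators carry property annotations such as selectivity and cost, and a SOFA-style logical optimizer can use those annotations to reorder a data flow so that cheap, selective operators run first.

```python
# Illustrative sketch only: hypothetical names, not the system's actual API.
# Declarative operators carry property annotations (selectivity, cost,
# reorderability) that a SOFA-style logical optimizer can exploit.
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class Operator:
    name: str
    fn: Callable[[Iterable[str]], Iterator[str]]
    selectivity: float = 1.0   # annotated fraction of records that survive
    cost: float = 1.0          # annotated relative cost per record
    commutes: bool = True      # safe to swap with neighboring operators

def filter_op(name: str, pred: Callable[[str], bool], selectivity: float) -> Operator:
    return Operator(name, lambda recs: (r for r in recs if pred(r)), selectivity, 0.1)

def optimize(plan: list[Operator]) -> list[Operator]:
    """Within each run of commuting operators, run the most selective,
    cheapest ones first so the record stream shrinks early (a heuristic)."""
    out: list[Operator] = []
    run: list[Operator] = []
    for op in plan:
        if op.commutes:
            run.append(op)
        else:
            out += sorted(run, key=lambda o: (o.selectivity, o.cost))
            run = []
            out.append(op)
    return out + sorted(run, key=lambda o: (o.selectivity, o.cost))

def execute(plan: list[Operator], records: Iterable[str]) -> list[str]:
    for op in plan:
        records = op.fn(records)
    return list(records)

docs = ["PKD1 causes polycystic kidney disease", "weather report", "BRCA1 study"]
plan = [
    filter_op("long_enough", lambda r: len(r) > 14, selectivity=0.9),
    filter_op("mentions_gene", lambda r: "PKD1" in r or "BRCA1" in r, 0.1),
]
print([op.name for op in optimize(plan)])  # the selective gene filter moves first
print(execute(optimize(plan), docs))
```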
43

Augmenting Serial Streaming Telemetry with iNET Data Delivery

Reinwald, Carl 10 1900 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Incorporating network-based telemetry components into a flight test article creates new types of network-based data flows between a test article and a telemetry ground station. The emerging integrated Network Enhanced Telemetry (iNET) Standard defines new, network-based data delivery protocols which can produce various network data flows. Augmenting existing Serial Streaming Telemetry (SST) data flows with these network-based data flows is crucial to enhancing current flight test capabilities. This paper briefly introduces the network protocols referenced in the iNET Standard and then identifies the various data flows generated by network-based components which comply with the iNET Standard. Several combinations of SST and Telemetry Network System (TmNS) data flows are presented, and the enhanced telemetry capabilities provided by each combination are identified. The paper also addresses explicitly identifying time intervals of unused telemetry network bandwidth so that they can be reallocated to other test articles.
44

Adaptation of a Loral ADS 100 as a Remote Ocean Buoy Maintenance System

Sharp, Kirk, Thompson, Lorraine Masi 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California / The Naval Ocean Research and Development Activity (NORDA) has adapted the Loral Instrumentation Advanced Decommutation system (ADS 100) as a portable maintenance system for one of its remotely deployable buoy systems. This particular buoy system sends up to 128 channels of amplified sensor data to a centralized A/D for formatting and storage on a high density digital recorder. The resulting tapes contain serial PCM data in a format consistent with IRIG Standard 106-87. Predictable and correctable perturbations exist within the data due to the quadrature multiplexed telemetry system. The ADS 100 corrects for the perturbations of the telemetry system and provides the user with diagnostic tools to examine the stored data stream and determine the operational status of the buoy system prior to deployment.
46

STRUCTURED SOFTWARE DESIGN IN A REAL-TIME CONTROL APPLICATION

DeBrunner, Keith E. 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 1984 / Riviera Hotel, Las Vegas, Nevada / Software for real-time (time-critical) control applications has been shown in military and industry studies to be a very expensive type of software effort. This type of software is not typically addressed in discussions of software architecture design methods and techniques; the software engineer is therefore usually left with a sparse design “tool kit” when confronted with overall system design involving time-critical and/or control problems. This paper outlines the successful application of data flow and transaction analysis design methods to achieve a structured yet flexible software architecture for a fairly complex antenna controller used in automatic tracking antenna systems. Interesting adaptations of, and variations on, techniques described in the literature are discussed, as are issues of modularity, coupling, morphology, global data handling, and evolution (maintenance). Both positive and negative aspects of this choice of design method are outlined, and the importance of a capable real-time executive and of conditional compilation and assembly is stressed.
47

Telemetry Data Processing: A Modular, Expandable Approach

Devlin, Steve 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada / The growing complexity of missile, aircraft, and space vehicle systems, along with the advent of fly-by-wire and ultra-high-performance unstable airframe technology, has created an exploding demand for real-time processing power. Recent VLSI developments have made it possible to address these needs in the design of a multi-processor subsystem supplying 10 MIPS and 5 MFLOPS per processor. To provide up to 70 MIPS, a Digital Signal Processing subsystem may be configured with up to seven processors. Multiple subsystems may be employed in a data processing system to give the user virtually unlimited processing power. Within the DSP module, communication between cards is over a high-speed, arbitrated Private Data bus. This prevents the saturation of the system bus with intermediate results and allows a multiple-processor configuration to make full use of each processor. Design goals for a single processor included executing number system conversions, data compression algorithms, and 1st-order polynomials in under 2 microseconds, and 5th-order polynomials in under 4 microseconds. The processor design meets or exceeds all of these goals. Recently upgraded VLSI is available and makes possible a performance enhancement to 11 MIPS and 9 MFLOPS per processor with reduced power consumption. Design tradeoffs and example applications are presented.
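
For a sense of what the polynomial timing goals involve: engineering-unit conversion of a raw telemetry sample is typically one polynomial evaluation per sample. A hedged sketch of the standard approach, Horner's rule (this is not the paper's actual firmware, and the coefficients are made up):

```python
# Hedged illustration (not the paper's firmware): engineering-unit
# conversion of a raw telemetry count via a 5th-order calibration
# polynomial, evaluated with Horner's rule (5 multiplies + 5 adds).
def horner(coeffs: list[float], x: float) -> float:
    """Evaluate a polynomial given coefficients from highest order down."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Made-up coefficients: raw 12-bit count -> calibrated engineering units
cal = [2.1e-18, -3.4e-14, 1.8e-10, -2.2e-7, 4.9e-3, 0.12]
print(horner(cal, 2048.0))
```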
48

Improving dynamic analysis with data flow analysis

Chang, Walter Chochen 26 October 2010 (has links)
Many challenges in software quality can be tackled with dynamic analysis. However, these techniques are often limited in their efficiency or scalability because they are applied uniformly to an entire program. In this thesis, we show that dynamic program analysis can be made significantly more efficient and scalable by first performing a static data flow analysis, so that the dynamic analysis can be selectively applied only to important parts of the program. We apply this general principle to the design and implementation of two different systems, one for runtime security policy enforcement and the other for software test input generation. For runtime security policy enforcement, we enforce user-defined policies using a dynamic data flow analysis that is more general and flexible than previous systems. Our system uses the user-defined policy to drive a static data flow analysis that identifies and instruments only the statements that may be involved in a security vulnerability, often eliminating the need to track most objects and greatly reducing the overhead. For taint analysis on a set of five server programs, the slowdown is only 0.65%, two orders of magnitude lower than previous taint tracking systems. Our system also has negligible overhead on file disclosure vulnerabilities, a problem that taint tracking cannot handle. For software test case generation, we introduce the idea of targeted testing, which focuses testing effort on select parts of the program instead of treating all program paths equally. Our “Bullseye” system uses a static analysis performed with respect to user-defined “interesting points” to steer the search down certain paths, thereby finding bugs faster. We also introduce a compiler transformation that allows symbolic execution to automatically perform boundary condition testing, revealing bugs that could be missed even if the correct path is tested. For our set of 9 benchmarks, Bullseye finds bugs an average of 2.5× faster than a conventional depth-first search and finds numerous bugs that DFS could not. In addition, our automated boundary condition testing transformation allows both Bullseye and depth-first search to find numerous bugs that they could not find before, even when all paths were explored.
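
A toy sketch of the central idea — a static data flow pass selecting the few statements that the dynamic analysis must instrument — might look like the following Python fragment. The miniature IR and all names are illustrative assumptions, not the thesis's implementation.

```python
# Toy illustration (not the thesis's system): a static forward data flow
# pass marks the statements that can carry data from a taint source to a
# sink, so only those statements need dynamic instrumentation.
SOURCE, SINK = "read_input", "exec_query"

# Tiny straight-line program in a made-up IR: (dest, op, args)
prog = [
    ("a", "read_input", ()),         # taint source
    ("b", "concat", ("a", "lit")),   # propagates taint
    ("c", "length", ("b",)),         # propagates taint (conservatively)
    ("d", "now", ()),                # untainted; never instrumented
    (None, "exec_query", ("b",)),    # sink
]

def statements_to_instrument(prog):
    """Forward propagation of a may-be-tainted set over the program."""
    tainted, relevant = set(), []
    for i, (dest, op, args) in enumerate(prog):
        if op == SOURCE:
            tainted.add(dest)
            relevant.append(i)
        elif any(a in tainted for a in args):
            if dest is not None:
                tainted.add(dest)
            relevant.append(i)
    return relevant

relevant = statements_to_instrument(prog)
print(f"instrument {len(relevant)} of {len(prog)} statements: {relevant}")
for i in relevant:
    dest, op, args = prog[i]
    if op == SINK:
        print(f"statement {i}: tainted data reaches sink {op}{args}")
```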
49

Spécification formelle de systèmes temps réel répartis par une approche flots de données à contraintes temporelles / Formal specification of distributed real time systems using an approach based on temporally constrained data flows

Le Berre, Tanguy 23 March 2010 (has links)
Real time systems are usually defined as systems where the total correctness of an operation depends not only on its logical correctness, but also on its execution time. Under this definition, time constraints are defined according to system operations. Another definition of real time systems is centered on data, where the correctness of a system depends on the timed correctness of its data and of the data flows across the system: we expect the values taken by the variables to be regularly renewed and to be consistent with the environment and the other variables. I propose a modeling framework based on this latter definition. This approach allows users to focus on specifying time constraints attached to data and to postpone task and communication scheduling matters. The timed requirements are not expressed as constraints on the implementation mechanism, but on the relations binding the system's variables. These relations between data are expressed in terms of a so-called observation relation, which abstracts the relation between the values taken by some variables: the set of sources and the image. This relation abstracts the communication as well as the computational operations, and a set of observation relations models the system architecture and the data flows by defining the paths along which values of sources are propagated to build the values of an image. The real time properties are expressed as constraints on the propagation paths and state the temporal validity of the values. This temporal validity is defined by the time shift between the source and the image, and specifies the propagation of timely sound values along the path to build temporally correct values of the system outputs. At this level of abstraction, the designer gives a specification of the system based on timed properties about the timeline of data, such as their freshness, stability, and latency. In order to prove the feasibility of an observation-based model, a finite state transition system bisimilar with the specification is built. The existence of a finite bisimilar system is deduced from the bounded time shift between the variables. The existence of an infinite execution in this system proves the feasibility of the specification.
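
One plausible way to formalize this (the notation below is an illustrative assumption, not necessarily the thesis's own) is to write the observation relation as an image variable whose value at any time is built from time-shifted source values, with temporal validity bounding each shift:

```latex
% Hedged sketch: illustrative notation, not necessarily the thesis's own.
% An image variable y observes sources x_1,...,x_n: every value of y is
% built from time-shifted source values, and temporal validity bounds
% each shift by a freshness constraint Delta_i.
\[
  y \;\mathcal{O}\; (x_1,\dots,x_n) \quad\Longleftrightarrow\quad
  \forall t \;\exists\, t_1,\dots,t_n \le t :\;
  y(t) = f\bigl(x_1(t_1),\dots,x_n(t_n)\bigr)
\]
\[
  \text{temporal validity:}\quad \forall i,\; t - t_i \le \Delta_i
\]
```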
50

Integrating Visual Data Flow Programming with Data Stream Management

Melander, Lars January 2016 (has links)
Data stream management and data flow programming have many things in common. In both cases one wants to transfer possibly infinite sequences of data items from one place to another, while performing transformations to the data. This thesis focuses on the integration of a visual programming language with a data stream management system (DSMS) to support the construction, configuration, and visualization of data stream applications. In the approach, analyses of data streams are expressed as continuous queries (CQs) that emit data in real time. The LabVIEW visual programming platform has been adapted to support easy specification of continuous visualization of CQ results. LabVIEW has been integrated with the DSMS SVALI through a stream-oriented client-server API. Query programming is declarative, and it is desirable to make the stream visualization declarative as well, in order to raise the abstraction level and make programming more intuitive. This has been achieved by adding a set of visual data flow components (VDFCs) to LabVIEW, based on the LabVIEW actor framework. With actor-based data flows, visualization of data stream output becomes more manageable, avoiding the procedural control structures used in conventional LabVIEW programming while still utilizing the comprehensive, built-in LabVIEW visualization tools. The VDFCs are part of the Visual Data stream Monitor (VisDM), which is a client-server based platform for handling real-time data stream applications and visualizing stream output. VDFCs are based on a data flow framework that is constructed from the actor framework, and are divided into producers, operators, consumers, and controls. They allow a user to set up the interface environment, customize the visualization, and convert the streaming data to a format suitable for visualization. Furthermore, it is shown how LabVIEW can be used to graphically define interfaces to data streams and dynamically load them in SVALI through a general wrapper handler. As an illustration, an interface has been defined in LabVIEW for accessing data streams from a digital 3D antenna. VisDM has successfully been tested in two real-world applications, one at Sandvik Coromant and one at the Ångström Laboratory, Uppsala University. For the first case, VisDM was deployed as a portable system to provide direct visualization of machining data streams. The data streams can differ in many ways, as do the various visualization tasks. For the second case, data streams are homogenous, high-rate, and query operations are much more computation-demanding. For both applications, data is visualized in real time, and VisDM is capable of sufficiently high update frequencies for processing and visualizing the streaming data without obstructions. The uniqueness of VisDM is the combination of a powerful and versatile DSMS with visually programmed and completely customizable visualization, while maintaining the complete extensibility of both.
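
As a hedged illustration of the producer/operator/consumer split (hypothetical names and structure, not VisDM's actual API), a minimal actor-style data flow in Python might look like this:

```python
# Illustrative sketch only (hypothetical names, not VisDM's actual API):
# a minimal actor-style data flow with the producer / operator / consumer
# roles the VDFCs are divided into, wired together by queues.
import queue
import threading
import time

def producer(out_q: queue.Queue, n: int = 5) -> None:
    """Emit a finite sample stream (stands in for a CQ result stream)."""
    for i in range(n):
        out_q.put(float(i))
        time.sleep(0.01)
    out_q.put(None)  # end-of-stream marker

def operator(in_q: queue.Queue, out_q: queue.Queue) -> None:
    """Transform each item (here: scale it), forwarding end-of-stream."""
    while (item := in_q.get()) is not None:
        out_q.put(item * 2.0)
    out_q.put(None)

def consumer(in_q: queue.Queue) -> None:
    """Stand-in for a visualization sink: 'render' each arriving value."""
    while (item := in_q.get()) is not None:
        print(f"render {item:.1f}")

q1, q2 = queue.Queue(), queue.Queue()
stages = [threading.Thread(target=producer, args=(q1,)),
          threading.Thread(target=operator, args=(q1, q2)),
          threading.Thread(target=consumer, args=(q2,))]
for t in stages:
    t.start()
for t in stages:
    t.join()
```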
