  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Real-time computing in a networking environment: an air traffic control system case study

Guo, Dahai 01 July 2001 (has links)
No description available.
252

Automatic construction and occlusion sensitive selection of level-of-detail models for procedurally modeled plants

Johnston, Jaren 01 July 2000 (has links)
No description available.
253

Component based software engineering to design real-time software

Bhatia, Manu 01 July 2002 (has links)
No description available.
254

Mixed-service environment for distributed real-time systems

Srinivasan, Ramakrishna 01 July 2003 (has links)
No description available.
255

The design of a real-time parallel processor synthetic aperture radar system

Roche, Michael William 01 July 2001 (has links)
No description available.
256

Migrating from a VAX/VMS to an Intel/Windows-NT based Ground Station

Penna, Sergio D., Rios, Domingos B. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Upgrading or replacing production systems is always a very resource-consuming task, particularly when the systems being replaced are quite specialized, such as those serving any Flight Test Ground Station. In the recent past a large number of Ground Station systems were based on Digital’s VAX/VMS architecture. The computer industry then expanded very fast, and by 1990 real-time PCM data processing systems built entirely on hardware and software designed for IBM-PC compatible micro-computers were becoming available. A complete system replacement in a typical Ground Station can take from one to several years to become a reality. It depends on how complex the original system is, how complex the resulting system needs to be, how many resources are available to support the operation, how soon the organization needs it, etc. This paper reviews the main concerns encountered during the replacement of a typical VAX/VMS-based Ground Station by an Intel/Windows NT-based one. It covers the transition from original requirements to totally new requirements, from mini-computers to micro-computers, and from DMA to high-speed LAN data transfers, while conserving some key architectural features. This 8-month development effort will expand EMBRAER’s capability in acquiring, processing and archiving PCM data in the next few years at a lower cost, while preserving compatibility with legacy flight test data.
257

Estudo de resiliência em comunicação entre sistemas multirrobôs utilizando HLA [Study of resilience in communication between multi-robot systems using HLA]

Simão, Rivaldo do Ramos 04 March 2016 (has links)
Cooperation in a multi-robot system has become a challenge to be overcome and one of the biggest incentives for researchers in this area, since communication is one of its most important requirements. This study investigates the feasibility of using the distributed simulation environment HLA (High-Level Architecture) for communication between the members of a system of three and five computers simulating a multi-robot system, in order to verify its behavior when one of the machines is replaced by another with limited processing power. To this end, a new communication approach based on the HLA middleware was developed, in which the robots adapt their transmission rate according to the performance of the other robots. The experiments carried out show that the real-time requirements of a robot soccer application were met with this approach, pointing to a new possibility for real-time communication between robots. In one of the experiments, a direct comparison was made between the RTDB (Real-Time Database) middleware and the proposed approach; it showed that, in some scenarios, the adaptive HLA is about 5 to 12 percent more efficient than RTDB.
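The adaptive-rate idea in this abstract can be sketched without any HLA specifics. The Python fragment below only illustrates the back-off logic under assumed names and thresholds (adapt_period, slow_threshold, the lag values); it makes no HLA/RTI calls and is not the author's implementation.

```python
def adapt_period(base_period, peer_lags, slow_threshold=0.05, max_backoff=4.0):
    """Scale the send period up when any peer is lagging.

    peer_lags: observed delays (s) between publishing an update and each
    peer acknowledging it. If the slowest peer exceeds slow_threshold,
    the sender backs off proportionally, capped at max_backoff times
    the base period.
    """
    worst = max(peer_lags) if peer_lags else 0.0
    if worst <= slow_threshold:
        return base_period
    factor = min(worst / slow_threshold, max_backoff)
    return base_period * factor

# Toy run: one fast peer and one under-powered peer that keeps getting slower.
period = 0.02  # 50 Hz nominal update rate
for lags in ([0.01, 0.02], [0.01, 0.12], [0.01, 0.30]):
    print(f"peer lags {lags} -> send every {adapt_period(period, lags) * 1000:.0f} ms")
```

In this sketch the sender slows its publications when a weaker peer cannot keep up, which is one plausible reading of "robots adapt their transmission rate according to the performance of other robots".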
258

A computational architecture for real-time systems

Mostert, Sias 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2000. / The engineering of dependable real-time systems for mission-critical applications is a resource-intensive and error-prone process. Achieving dependability requires a general consensus on the correctness of a system with regard to its intended function. For a consensus to be achieved, the properties of the system must be well understood, which in turn requires consensus on a rigorously defined computational architecture. There is currently no single agreed-upon computational architecture at the application level which can serve as a common denominator for the design and implementation of real-time systems. It is the thesis of this dissertation that a rigorous computational architecture, applicable from design to implementation, enables engineers to better understand software for real-time systems. To substantiate this claim, the real-time data flow architecture RDF, with a notation that allows complete systems to be described from design to implementation, is explored. Four distinct research areas for improving the engineering process of real-time systems are dealt with in the dissertation: 1) the development of an architecture for real-time systems suitable for design and implementation in software and hardware, 2) the consolidation of a number of graphical languages into a graphical notation for the functional specification, design and construction of real-time systems, 3) the development of a simple processor architecture for the execution of real-time applications, and 4) the evaluation of the architecture in the framework of a microsatellite case study. In particular, the following original contributions are made: 1) the firing semantics of data flow systems are expanded in a novel way to include disjunctive firing semantics in addition to the classical conjunctive firing semantics, 2) the inherent real-time data flow property, i.e. that a receiving task must be ready to receive the next incoming message when it is sent, is extended to the synchronous data flow model, 3) a notation for describing all properties of real-time systems is defined with the real-time data flow language RDF as base language, 4) two hardware processor architectures are introduced that offer one-to-one correspondence between design and implementation and thus reduce the semantic gap between design language and program execution, and 5) the class of systems that can be modelled with data flow architectures is shown to include control systems and data flow systems. The language set and processor architecture were applied to certain aspects of the SUNSAT microsatellite project.
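The distinction between conjunctive and disjunctive firing mentioned in contribution 1) can be illustrated with a small sketch. Python is used purely for illustration; the Node class and its ready/fire methods are assumptions of this sketch, not part of the RDF notation or the dissertation's processor architectures.

```python
from collections import deque

class Node:
    """A data flow node with conjunctive or disjunctive firing.

    Conjunctive: fire only when every input queue holds a token.
    Disjunctive: fire as soon as any one input queue holds a token,
    consuming only the inputs that currently hold tokens.
    """
    def __init__(self, name, n_inputs, mode="conjunctive"):
        self.name = name
        self.inputs = [deque() for _ in range(n_inputs)]
        self.mode = mode

    def ready(self):
        check = all if self.mode == "conjunctive" else any
        return check(len(q) > 0 for q in self.inputs)

    def fire(self):
        if not self.ready():
            return None
        tokens = [q.popleft() for q in self.inputs
                  if self.mode == "conjunctive" or q]
        return f"{self.name} fired with {tokens}"

# A conjunctive join waits for both inputs; a disjunctive join does not.
conj, disj = Node("AND-join", 2), Node("OR-join", 2, mode="disjunctive")
conj.inputs[0].append("a"); disj.inputs[0].append("a")
print(conj.fire())  # None -- second input is still empty
print(disj.fire())  # OR-join fired with ['a']
```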
259

Improved regulatory oversight using real-time data monitoring technologies in the wake of Macondo

Carter, Kyle Michael 10 October 2014 (has links)
As shown by the Macondo blowout, a deepwater well control event can result in loss of life, harm to the environment, and significant damage to company and industry reputation. Consistent adherence to safety regulations is a recurring issue in deepwater well construction. The two federal entities responsible for offshore U.S. safety regulation are the Department of the Interior’s Bureau of Safety and Environmental Enforcement (BSEE) and the U.S. Coast Guard (USCG), with regulatory authorities that span well planning, drilling, completions, emergency evacuation, environmental response, etc. Given the wide range of rules these agencies are responsible for, safety compliance cannot be comprehensively verified with the current infrequency of on-site inspections. Offshore regulation and operational safety could be greatly improved through continuous remote real-time data monitoring. Many government agencies have adopted monitoring regimes dependent on real-time data for improved oversight (e.g. NASA Mission Control, the USGS Earthquake Early Warning System, USCG Vessel Traffic Services, etc.). Appropriately, real-time data monitoring was either re-developed or introduced in the wake of catastrophic events within those sectors (e.g. Challenger, tsunamis, Exxon Valdez, etc.). Over recent decades, oil and gas operators have developed Real-Time Operations Centers (RTOCs) for continuous, pro-active operations oversight and remote interaction with on-site personnel. Commonly seen as collaborative hubs, RTOCs provide a central conduit for shared knowledge, experience, and improved decision-making, thus optimizing performance, reducing operational risk, and improving safety. In particular, RTOCs have been useful in identifying and mitigating potential well construction incidents that could have resulted in significant non-productive time and trouble cost. In this thesis, a comprehensive set of recommendations is made to BSEE and USCG to expand and improve their regulatory oversight activities through remote real-time data monitoring and the application of emerging real-time technologies that aid in data acquisition and performance optimization for improved safety. Data sets and tools necessary for regulators to effectively monitor and regulate deepwater operations (Gulf of Mexico, Arctic, etc.) on a continuous basis are identified. Data from actual GOM field cases are used to support the recommendations. In addition, the case is made for the regulator to build a collaborative foundation with deepwater operators, academia and other stakeholders through the employment of state-of-the-art knowledge management tools and techniques. This will allow the regulator to do “more with less” in order to address the fast pace of activity expansion and technology adoption in deepwater well construction, while maximizing corporate knowledge and retention. Knowledge management provides a connection that can foster a truly collaborative relationship between regulators, industry, and non-governmental organizations with a common goal of safety assurance and without confusing lines of authority or responsibility. This solves several key issues for regulators with respect to access to experience and technical know-how, by leveraging industry experts who would not normally have been accessible.
For implementation of the proposed real-time and knowledge management technologies and workflows, a phased approach is advocated, to be carried out under the auspices of the Center for Offshore Safety (COS) and/or the Offshore Energy Safety Institute (OESI). Academia can play an important role, particularly in the early phases of the program, as a neutral playing ground where tools, techniques and workflows can be tried and tested before wider adoption takes place.
260

Development and implementation of a real-time observer model for mineral processing circuits.

Vosloo, John-Roy Ivy. January 2004 (has links)
Mineral processing plants, such as LONMIN's Eastern Platinum B-stream, typically have few on-line measurements, and key measures of performance such as grade only become available after samples have been analysed in the laboratory. More immediate feedback from a dynamic observer model promises enhanced understanding of the process and facilitates prompt corrective actions, whether in open or closed loop. Such plants easily enter sub-optimal modes, such as large, uselessly re-circulating loads, as the feed conditions change. Interpretation of such modes from key combinations of the variables deduced by an observer model, using a type of expert system, would add another level of intelligence to benefit operation. The aim of this thesis was to develop a dynamic observer model of the LONMIN Eastern Platinum B-stream and implement it on one of the existing control platforms available at the plant, PlantStar®, developed by MINTEK. The solution of the system of differential and algebraic equations resulting from this type of flowsheet modelling is based on an extended Kalman filter, which is able to dynamically reconcile, in real time, any measurements presented to it. These measurement selections may also vary in real time, which provides flexibility in the model solution and the model's uses. PlantStar passes the measurements available at the plant to the dynamic observer model through a "plugin" module, which was developed to incorporate the observer model and utilise the PlantStar control platform. In an on-line situation, the model tracks the plant's behaviour and continuously updates its state in real time to ensure it follows the plant closely. The model can then run simulations of the plant in parallel and could be used as a training facility for new operators, while in a real-time situation it can provide estimates of unmeasurable variables throughout the plant; examples are the flotation rate constants of minerals, which can be estimated in real time by the extended Kalman filter. The model could also be used to predict future plant conditions based on the current plant state, allowing case scenarios to be explored without affecting the actual plant's performance. Once the dynamic observer model and "plugin" module were completed, case scenario simulations were performed using a measured data set from the plant as a starting point, because real-time data were unavailable while the model was developed off-site. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2004.
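As a rough illustration of the extended-Kalman-filter observer idea described above (not the thesis's actual flowsheet model, and not the PlantStar plug-in API), the sketch below shows a generic predict/reconcile cycle in Python; the state, dynamics and measurement functions are placeholder assumptions.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One extended Kalman filter cycle: predict from the process model f,
    then reconcile the prediction with whatever measurements z are available.

    x, P : current state estimate and covariance
    u    : known inputs (e.g. feed-rate set-points)
    z    : measurement vector (its composition may change in real time)
    f, h : process and measurement models; F, H return their Jacobians
    Q, R : process and measurement noise covariances
    """
    x_pred = f(x, u)                           # predict
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q

    H_k = H(x_pred)                            # reconcile with measurements
    y = z - h(x_pred)                          # innovation
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Toy example: a single hold-up state observed by one noisy level sensor.
f = lambda x, u: 0.95 * x + 0.05 * u           # simple first-order dynamics
F = lambda x, u: np.array([[0.95]])
h = lambda x: x                                # sensor reads the state directly
H = lambda x: np.array([[1.0]])
x, P = np.array([10.0]), np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.25]])
x, P = ekf_step(x, P, u=np.array([12.0]), z=np.array([10.4]), f=f, h=h, F=F, H=H, Q=Q, R=R)
print(x, P)
```

Changing which rows of z, h and H are supplied at each call corresponds to the abstract's point that the set of reconciled measurements may vary in real time.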
