271 |
CORBA in the aspect of replicated distributed real-time databases
Milton, Robert January 2002 (has links)
A distributed real-time database (DRTDB) is a database distributed over a network on several nodes, where the transactions are associated with deadlines. The issues of concern in this kind of database are data consistency and the ability to meet deadlines. In addition, the nodes on which the database is distributed may be heterogeneous, that is, built on different platforms and written in different languages. This makes the integration of these nodes difficult, since data types may be represented differently on different nodes. The common object request broker architecture (CORBA), defined by the Object Management Group (OMG), is a distributed object computing (DOC) middleware created to overcome problems with heterogeneous sites. The project described in this paper aims to investigate the suitability of CORBA as a middleware in a DRTDB. Two extensions to CORBA, Fault-Tolerant CORBA (FT-CORBA) and Real-Time CORBA (RT-CORBA), are of particular interest, since the combination of these extensions provides object replication and end-to-end predictability, respectively. The project focuses on the ability of RT-CORBA to meet hard deadlines and of FT-CORBA to maintain replica consistency by using replication with eventual consistency. The investigation of the combination of RT-CORBA and FT-CORBA results in two proposed architectures that meet real-time requirements and provide replica consistency with CORBA as the middleware in a DRTDB.
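The abstract gives no implementation detail, but the eventual-consistency model it relies on can be sketched independently of CORBA. The following minimal Python fragment is a hypothetical illustration only (node names, timestamps and the last-writer-wins rule are invented assumptions, not the FT-CORBA mechanism): replicas accept local writes immediately, so deadline-bound transactions need not wait, and converge later through background merging.

# Hypothetical sketch of replication with eventual consistency: each node
# applies writes locally and asynchronously propagates them to its peers.
# Conflicts are reconciled with a last-writer-wins rule on logical
# timestamps. This conveys the consistency model only; it is not the
# thesis's FT-CORBA replication mechanism.

from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    store: dict = field(default_factory=dict)   # key -> (timestamp, value)

    def local_write(self, key, value, ts):
        """Apply a write locally; the deadline-bound transaction commits here."""
        self.store[key] = (ts, value)

    def merge(self, other):
        """Eventual consistency: pull peer state, keep the newest write per key."""
        for key, (ts, value) in other.store.items():
            if key not in self.store or self.store[key][0] < ts:
                self.store[key] = (ts, value)

a, b = Replica("node_a"), Replica("node_b")
a.local_write("sensor_1", 42.0, ts=1)   # commit on A without waiting for B
b.local_write("sensor_1", 43.5, ts=2)   # concurrent, newer write on B
a.merge(b); b.merge(a)                  # background propagation
assert a.store == b.store               # replicas converge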
|
272 |
Real Time Evolution (RTE) for on-line optimisation of continuous and semi-continuous chemical processes
Sequeira, Sebastián Eloy 15 July 2003 (has links)
In general, process control is very effective when the desired operating point has been determined from prior analysis and the control system has sufficient time to respond to disturbances. While process control is required for regulating some process variables, the application of these methods may not be appropriate for all important variables. In some situations, the best operating conditions change because of the combined effect of internal and external disturbances, and a fixed control design may not respond properly to these changes. When certain conditions are met, on-line optimisation becomes a suitable choice for tracking the moving optimum. In order to "pursue" that moving optimum, on-line optimisation periodically solves optimisation problems using data coming directly from the plant and a continuously updated model. The most common use of on-line optimisation is for continuous processes. This is mainly because steady-state models are simpler and easier to develop and validate, and because continuous processes commonly have high production rates, so small relative improvements in process efficiency translate into significant economic gains. Nevertheless, although the use of steady-state models greatly simplifies the modelling task, it raises other issues associated with the validity of the steady-state assumption. Large-scale applications of on-line optimisation have started to spread; however, even though several vendors offer products and services in the area, most industrial applications address advanced control issues, while on-line optimisation is relegated to a second plane. Industry practitioners have reported that after four decades there has been progressive improvement in on-line optimisation methodology, but the original weaknesses, or more generally some common causes of poor performance, still remain. These issues are directly related to steady-state detection (or disturbance frequency) and to the optimisation itself. The objectives of this thesis are therefore directed at overcoming, at least partially, the weak points of the current approach. The result is an alternative strategy that takes full advantage of on-line measurements and seeks periodic improvement rather than formal optimisation. It is shown that the proposed approach is very efficient and can be applied not only to on-line set-point optimisation but also to the on-line discrete decisions required in processes with decaying performance (an aspect typically solved off-line via mathematical programming).
The thesis is structured as follows. The first chapter explains the main motivations and objectives of the work, while chapter 2 is a literature review that addresses, to some extent, the most significant issues around the on-line optimisation functionality. After that, chapters 3 and 4 introduce two methodologies that use the proposed strategy for on-line optimisation, which is the main contribution of the thesis. The first (chapter 3) focuses on tracking fast-moving optima, caused mainly by the combined effect of external and internal disturbances. The second, a parallel methodology explained in chapter 4, is conceived for processes that present decaying performance and require discrete decisions related to maintenance actions. Both chapters include a first, rather theoretical part and a second part devoted to validation on typical benchmarks. Then, chapter 5 describes the application of these methodologies in two existing industrial scenarios, in order to complement the results obtained using the benchmarks. After that, chapter 6 addresses two implementation issues: the influence of the adjustable parameters of the proposed procedure and the software architectures used. Finally, chapter 7 draws the main conclusions and observations.
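The concrete RTE algorithm is not spelled out in the abstract; the loop below is only a minimal sketch of the measurement-driven improvement it describes. Each cycle, a small neighbourhood of the current set-point is evaluated with a model updated from plant data and the best candidate is kept. The cost function, step size and disturbance model are invented placeholders, not the thesis case studies.

# Hedged sketch in the spirit of Real Time Evolution: instead of a formal
# optimisation each cycle, evaluate a small neighbourhood of the current
# set-point and move to the best candidate, so the operating point tracks
# a drifting optimum.

def plant_cost(setpoint, disturbance):
    # Stand-in for a model-evaluated economic cost; the true optimum
    # drifts with the disturbance.
    return (setpoint - (5.0 + disturbance)) ** 2

def rte_step(setpoint, disturbance, step=0.2):
    candidates = [setpoint - step, setpoint, setpoint + step]
    return min(candidates, key=lambda s: plant_cost(s, disturbance))

setpoint = 0.0
for cycle in range(100):
    disturbance = 0.01 * cycle          # slowly moving optimum
    setpoint = rte_step(setpoint, disturbance)
print(f"final set-point: {setpoint:.2f}")  # tracks the drifting optimum near 6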
|
273 |
An integrated approach to real-time multisensory inspection with an application to food processing
Ding, Yuhua 26 November 2003 (has links)
Real-time inspection based on machine vision technologies is widely used for quality control and cost reduction in a variety of application domains. High demands on inspection performance, combined with low-cost requirements, make algorithm design a challenging task that requires new and innovative methodologies in image processing and fusion. In this research, an integrated approach that combines novel image processing and fusion techniques is proposed for the efficient design of accurate, real-time, machine vision-based inspection algorithms, with an application to a food processing problem.
Firstly, a general methodology is introduced for the effective detection of defects and foreign objects that possess certain spectral and shape features. The factors that affect the performance metrics are analyzed, and a recursive segmentation and classification scheme is proposed to improve segmentation accuracy. The developed methodology is applied to real-time fan bone detection in deboned poultry meat, achieving a detection rate of 93% and a false alarm rate of 7% in lab-scale testing on 280 samples.
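The recursive idea can be sketched as follows; the thresholds, feature and toy classifier here are hypothetical assumptions (the thesis's spectral and shape features are not given in the abstract): regions the classifier finds ambiguous are re-segmented with a tighter threshold before a final decision.

# Illustrative sketch only, not the thesis algorithm: recursive
# segmentation in which uncertain regions are re-segmented with a
# stricter threshold before an accept/reject decision.

def classify(region_intensity):
    """Toy classifier: returns 'defect', 'clean', or 'ambiguous'."""
    if region_intensity > 0.8:
        return "defect"
    if region_intensity < 0.4:
        return "clean"
    return "ambiguous"

def recursive_segment(pixels, threshold=0.5, depth=0, max_depth=3):
    region = [p for p in pixels if p > threshold]
    if not region:
        return "clean"
    label = classify(sum(region) / len(region))
    if label == "ambiguous" and depth < max_depth:
        # Re-segment only the uncertain region with a tighter threshold.
        return recursive_segment(region, threshold + 0.1, depth + 1, max_depth)
    return "defect" if label == "defect" else "clean"

print(recursive_segment([0.2, 0.55, 0.6, 0.95, 0.7]))  # -> 'defect'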
Secondly, a novel snake-based algorithm is developed for the segmentation of vector-valued images. The snakes are driven by the weighted sum of the optimal forces derived from the corresponding energy functionals in each image, where the weights are determined by a novel metric that measures both local contrast and noise power in the individual sensor images. This algorithm is effective in improving segmentation accuracy when imagery from multiple sensors is available to the inspection system. The effectiveness of the developed algorithm is verified using (i) synthesized images, (ii) real medical and aerial images, and (iii) color and x-ray chicken breast images. The results further confirm that the algorithm yields higher segmentation accuracy than single-sensor methods and can accommodate a certain amount of registration error. This feature-level image fusion technique can be combined with pixel- and decision-level techniques to improve overall inspection system performance.
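The weighting scheme can be conveyed with a toy sketch; the contrast-to-noise weight below is an assumption standing in for the thesis's metric, not its actual derivation, and the force values are invented.

# Sketch of the fusion idea only: each sensor image contributes a snake
# force weighted by how informative it is locally, so the low-noise,
# high-contrast sensor dominates the combined force driving the contour.

def sensor_weight(local_contrast, noise_power, eps=1e-9):
    return local_contrast / (noise_power + eps)

def fused_force(forces, contrasts, noises):
    """Weighted sum of per-sensor snake forces at one contour point."""
    weights = [sensor_weight(c, n) for c, n in zip(contrasts, noises)]
    total = sum(weights)
    return sum(w / total * f for w, f in zip(weights, forces))

# Two sensors (e.g. colour and x-ray): the colour image is clean and
# high-contrast here, so it dominates the fused force.
print(fused_force(forces=[1.0, -0.5], contrasts=[0.9, 0.3], noises=[0.01, 0.2]))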
|
274 |
Effect Of Some Software Design Patterns On Real Time Software Performance
Ayata, Mesut 01 June 2010 (has links)
In this thesis, the effects of some software design patterns on real-time software performance are investigated. In real-time systems, performance requirements are critical. Real-time system developers usually use functional languages to meet these requirements, and using an object-oriented language may be expected to reduce performance. However, if suitable software design patterns are applied carefully, the reduction in performance can be avoided. In this thesis, appropriate real-time software performance metrics are selected and used to measure the performance of real-time software systems.
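The abstract does not name the patterns or metrics; the fragment below merely illustrates the kind of micro-measurement such a study rests on, comparing a direct call with the same work routed through a Strategy-style indirection. The workload and pattern choice are invented, not those of the thesis.

# Hedged example: timing the extra dispatch cost a design pattern adds
# over a direct call, using a simple Strategy object as the indirection.

import timeit

class Strategy:
    def execute(self, x):
        return x * x

class Context:
    def __init__(self, strategy):
        self.strategy = strategy
    def run(self, x):
        return self.strategy.execute(x)   # extra dispatch vs the direct call

def direct(x):
    return x * x

ctx = Context(Strategy())
t_direct = timeit.timeit(lambda: direct(7), number=1_000_000)
t_pattern = timeit.timeit(lambda: ctx.run(7), number=1_000_000)
print(f"direct: {t_direct:.3f}s  pattern: {t_pattern:.3f}s")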
|
275 |
Component Decomposition of Distributed Real-Time Systems
Brohede, Marcus January 2000 (has links)
Development of distributed real-time applications, in contrast to best-effort applications, has traditionally been a slow process due to the lack of available standards and the fact that no commercial off-the-shelf (COTS) distributed object computing (DOC) middleware supporting real-time requirements has been available to speed up the development process without sacrificing quality.

Standards and DOC middlewares are now emerging that address the key requirements of real-time systems, predictability and efficiency, and therefore new possibilities such as component decomposition of real-time systems arise.

A number of component-decomposed architectures of the distributed active real-time database system DeeDS are described and discussed, along with a discussion of the most suitable DOC middleware. DeeDS is suitable for this project since it supports hard real-time requirements and is distributed. The DOC middlewares addressed in this project are OMG's Real-Time CORBA, Sun's Enterprise JavaBeans, and Microsoft's COM/DCOM. The discussion to determine the most suitable DOC middleware focuses on real-time requirements, platform support, and whether implementations of these middlewares are available.
|
277 |
Global synchronization of asynchronous computing systems
Barnes, Richard Neil. January 2001 (has links)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|
278 |
Networking infrastructure and data management for large-scale cyber-physical systems
Han, Song, doctor of computer sciences 25 February 2013 (has links)
A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem, and how to interconnect these physically separated subsystems to achieve secure, reliable and real-time communication, is a big challenge. In this thesis, we first present a TDMA-based, low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication, and adjustable real-time service support. We then describe the network management techniques designed to ensure reliable routing and real-time services inside the subsystems, and the data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. We also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer and a constrained application protocol layer. This approach makes the underlying connectivity technologies transparent to application developers and thus enables rapid application development and efficient migration among different CPS platforms. At the end of this thesis, we present a semi-autonomous robotic system called the cyberphysical avatar. The cyberphysical avatar is built on our proposed network infrastructure and data management techniques. By integrating recent advances in body-compliant control in robotics and neuroevolution in machine learning, the cyberphysical avatar can adjust to an unstructured environment and perform physical tasks subject to critical timing constraints while under human supervision.
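As a rough, hypothetical illustration of why a TDMA schedule yields the predictability such subsystems need (slot length, node names and schedule are invented, not the thesis protocol): every node owns fixed slots in a repeating superframe, so its worst-case channel-access delay is bounded.

# Minimal TDMA sketch under stated assumptions: a fixed superframe is
# divided into equal slots, each assigned to one node, so the worst-case
# wait for a node's next transmission opportunity is one superframe.

SLOT_MS = 10
SCHEDULE = ["sensor_a", "sensor_b", "actuator_a", "gateway"]  # hypothetical
SUPERFRAME_MS = SLOT_MS * len(SCHEDULE)

def owner_of_slot(t_ms):
    """Which node may transmit at time t_ms."""
    return SCHEDULE[(t_ms // SLOT_MS) % len(SCHEDULE)]

def worst_case_wait_ms(node):
    """Upper bound on channel-access delay: one full superframe."""
    assert node in SCHEDULE
    return SUPERFRAME_MS

print(owner_of_slot(25))              # -> 'actuator_a'
print(worst_case_wait_ms("gateway"))  # -> 40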
|
279 |
Fault-Tolerance Strategies and Probabilistic Guarantees for Real-Time Systems
Aysan, Hüseyin January 2012 (has links)
Ubiquitous deployment of embedded systems is having a substantial impact on our society, since they interact with our lives in many critical real-time applications. Typically, embedded systems used in safety- or mission-critical applications (e.g., the aerospace, avionics, automotive or nuclear domains) work in harsh environments where they are exposed to frequent transient faults such as power supply jitter, network noise and radiation. They are also susceptible to errors originating from design and production faults. Hence, they must be designed to maintain the properties of timeliness and functional correctness even under error occurrences. Fault-tolerance plays a crucial role in achieving dependability, and the fundamental requirement for the design of effective and efficient fault-tolerance mechanisms is a realistic and applicable model of potential faults and their manifestations. An important factor to be considered in this context is the random nature of faults and errors, which, if addressed in the timing analysis by assuming a rigid worst-case occurrence scenario, may lead to inaccurate results. It is also important that the power, weight, space and cost constraints of embedded systems are respected by using the resources available for fault-tolerance efficiently. This thesis presents a framework for designing predictably dependable embedded real-time systems by jointly addressing the timeliness and reliability properties. It proposes a spectrum of fault-tolerance strategies particularly targeting embedded real-time systems. Efficient resource usage is attained by considering the diverse criticality levels of the systems' building blocks. The fault-tolerance strategies are complemented by the proposed probabilistic schedulability analysis techniques, which are based on a comprehensive stochastic fault and error model.
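To convey the flavour of a probabilistic timing guarantee (the thesis derives analytical bounds; this Monte Carlo sketch and all its numbers are illustrative assumptions only): sample random transient faults, charge one re-execution per fault, and estimate the probability that the response time still meets the deadline.

# Illustrative Monte Carlo sketch of a probabilistic deadline guarantee.
# Transient faults arrive at random (exponential inter-arrival times),
# each recovered by re-executing the task; we estimate P(deadline miss).

import random

def miss_probability(wcet, deadline, fault_rate, trials=100_000):
    misses = 0
    for _ in range(trials):
        # Count faults landing inside the execution window.
        t, faults = 0.0, 0
        while True:
            t += random.expovariate(fault_rate)
            if t > deadline:
                break
            faults += 1
        response = wcet * (1 + faults)      # one re-execution per fault
        if response > deadline:
            misses += 1
    return misses / trials

# Task with 2 ms WCET, 10 ms deadline, one expected fault per 20 ms:
print(f"P(deadline miss) ~ {miss_probability(2.0, 10.0, 1 / 20.0):.4f}")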
|
280 |
Reliability for Hard Real-time Communication in Packet-switched Networks
Ganjalizadeh, Milad January 2014 (links)
Nowadays, different companies use Ethernet for different industrial applications. Industrial Ethernet has specific requirements arising from its applications and environmental conditions, which distinguish it from corporate LANs. Real-time guarantees, which require precise synchronization between all communication devices, as well as reliability, are key in the performance evaluation of different methods [1]. High bandwidth, high availability, reduced cost, support for open infrastructure, and a deterministic architecture make packet-switched networks suitable for a variety of industrial distributed hard real-time applications. Although research on guaranteeing timing requirements in packet-switched networks has been done, communication reliability is still an open problem for hard real-time applications. In this thesis report, a framework for enhancing reliability in multi-hop packet-switched networks is presented. Moreover, a novel admission control mechanism using real-time analysis is proposed to provide deadline guarantees for hard real-time traffic. A generic and flexible simulator has been implemented for the purposes of this research study to measure the defined performance metrics; thanks to its flexibility, it can also be used in future research. The performance evaluation of the proposed solution shows a possible improvement in message error rate of several orders of magnitude, while the decrease in network utilization stays at a reasonable level.
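The admission-control idea can be sketched as follows; the simple utilisation bound below stands in for the thesis's real-time analysis, which is not detailed in the abstract, and all flow parameters are invented.

# Hedged sketch of admission control for hard real-time traffic: a new
# flow is admitted only if a schedulability test still holds for all
# admitted flows; rejecting a flow preserves the guarantees of the rest.

from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    transmit_ms: float   # worst-case transmission time per period
    period_ms: float     # also its deadline here

class AdmissionController:
    def __init__(self, bound=1.0):
        self.flows, self.bound = [], bound

    def admit(self, flow):
        utilisation = sum(f.transmit_ms / f.period_ms for f in self.flows)
        if utilisation + flow.transmit_ms / flow.period_ms <= self.bound:
            self.flows.append(flow)
            return True
        return False

ac = AdmissionController(bound=0.8)    # headroom left for best-effort traffic
print(ac.admit(Flow("control_loop", 2.0, 10.0)))   # True  (0.20)
print(ac.admit(Flow("video", 30.0, 40.0)))         # False (0.20 + 0.75 > 0.8)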
|