181

Emprego da reação em cadeia pela polimerase em tempo real para o controle de eficiência de bacterinas anti-leptospirose / Employment of real time polymerase chain reaction to control the efficiency of leptospirosis bacterins

Cristina Corsi Dib 31 August 2011 (has links)
Leptospira interrogans serovar Kennewicki strain Fromm was used to produce an experimental leptospirosis bacterin. Total RNA used for reverse transcription and for quantification of the LigA and LipL32 antigens by real-time PCR was extracted from aliquots collected from the bacterin dilutions before inactivation and stored at -80ºC. The remaining volume of the bacterin was inactivated in a water bath at 56ºC and kept at -20ºC for evaluation of its potency in hamsters and for detection and quantification of the LigA and LipL32 antigens by indirect ELISA and indirect sandwich ELISA. The hamster potency assay showed that the bacterin met international quality standards up to the 1/6400 dilution, protecting the hamsters against lethal infection when challenged with the 10^-6 dilution (100 infectious doses 50%/0.2 mL). Real-time PCR detected 3.2 x 10^3 and 2.3 x 10^1 copies of the mRNA encoding the LigA protein in the undiluted bacterin and in the 1:200 dilution, respectively. Only eight copies of the mRNA encoding the LipL32 protein were detected in the undiluted bacterin sample. Indirect ELISA did not detect the LigA protein in the inactivated bacterin sample but detected the LipL32 protein up to the 1/1600 dilution of the bacterin. The indirect sandwich ELISA assays showed cross-reactions in the control plates, so their results could not be included in the analysis. The real-time PCR results could not be correlated with the hamster potency test, but the indirect ELISA results for the LipL32 protein were consistent with those of the hamster potency test, offering a possible in vitro alternative for evaluating the potency of leptospirosis bacterins.
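The abstract does not describe how the copy numbers were derived; absolute quantification by real-time PCR is commonly done with a standard curve relating Ct to log copy number. The sketch below assumes that approach, with hypothetical dilution-series data, purely as an illustration of the arithmetic.

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Fit Ct = slope * log10(copies) + intercept by least squares."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0  # amplification efficiency
    return slope, intercept, efficiency

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate copy number for a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical ten-fold dilution series of a quantified standard
log10_copies = np.array([6, 5, 4, 3, 2, 1])
ct_values    = np.array([15.1, 18.5, 21.9, 25.2, 28.6, 32.0])

slope, intercept, eff = fit_standard_curve(log10_copies, ct_values)
print(f"slope={slope:.2f}, efficiency={eff:.2%}")
print(f"estimated copies at Ct=24.0: {copies_from_ct(24.0, slope, intercept):.0f}")
```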
182

Hybrid adaptive controller for resource allocation of real-rate multimedia applications

Vahia, Varin 01 April 2003 (has links)
Multimedia applications such as video streaming and Voice over IP are becoming common today with the tremendous growth of the Internet, and general-purpose operating systems are therefore required to support them. These multimedia applications have timing constraints that need to be satisfied for good quality. For example, video streaming applications require that each video frame be decoded in time to be displayed every 33.3 milliseconds. In order to satisfy these timing requirements, general-purpose operating systems need fine-grained scheduling. Current general-purpose operating systems, unfortunately, are designed to maximize throughput for traditional data-oriented applications and have coarse-grained scheduling and timers. Time Sensitive Linux (TSL), designed by Goel et al., solves this problem with fine-grained timers and schedulers. The scheduler in TSL is implemented at a very low level, while the controller that implements the resource-allocation algorithm is implemented at a higher level and can easily be modified to implement new control algorithms. Successful resource allocation that satisfies the timing constraints of multimedia applications requires two problems to be addressed. First, the resources required by the applications to satisfy their timing constraints should not exceed the total resources available in the system. Second, the controller must adapt to the changing needs of the applications and allocate enough resources to satisfy each application's timing constraints over time. The first problem has been addressed elsewhere using intelligent data dropping with TSL; we focus on the second problem in this thesis. We design a proportion-period controller for allocating CPU to multimedia video applications with timing constraints. The challenges for the controller design include the coarse granularity of the time-stamp markings of the video frames, the unpredictable decoding completion times of the frames, the large variations in the decoding times of the frames, and the restriction of the control actuation to positive values. We formulate the problem in state-space form and design a predictive estimating controller that allocates a proportion of the CPU to a thread when its long-term error is small. When the decoding process is running behind by more than a certain threshold, we switch to a different controller, obtained as the solution to an LQR tracking problem, to drive the error back to a small value. / Graduation date: 2003
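A minimal sketch of the switching idea described above: one update rule when the long-term error is small and a more aggressive one when it exceeds a threshold. The gains, threshold, and the simple proportional corrections standing in for the thesis's predictive estimator and LQR tracker are illustrative assumptions, not taken from the thesis.

```python
# Hybrid proportion-period CPU allocator (illustrative sketch only).
PERIOD_MS = 33.3          # display period per video frame
SWITCH_THRESHOLD_MS = 50  # long-term error beyond which we switch modes

def next_proportion(current_prop, frame_lateness_ms, long_term_error_ms):
    """Return the CPU proportion (0..1) for the next period."""
    if abs(long_term_error_ms) <= SWITCH_THRESHOLD_MS:
        # "Fine" mode: small corrective step based on per-frame lateness.
        correction = 0.002 * frame_lateness_ms
    else:
        # "Catch-up" mode: larger gain to drive the accumulated error back.
        correction = 0.01 * long_term_error_ms
    # Actuation is limited to positive proportions, as noted in the abstract.
    return min(1.0, max(0.01, current_prop + correction))

# Example: a thread running 20 ms late on this frame, 120 ms behind overall.
print(next_proportion(0.30, frame_lateness_ms=20.0, long_term_error_ms=120.0))
```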
183

CORBA in the aspect of replicated distributed real-time databases

Milton, Robert January 2002 (has links)
A distributed real-time database (DRTDB) is a database distributed over several nodes of a network and in which transactions are associated with deadlines. The issues of concern in this kind of database are data consistency and the ability to meet deadlines. In addition, the nodes over which the database is distributed may be heterogeneous, i.e., built on different platforms and written in different languages. This makes integrating these nodes difficult, since data types may be represented differently on different nodes. The Common Object Request Broker Architecture (CORBA), defined by the Object Management Group (OMG), is a distributed object computing (DOC) middleware created to overcome problems with heterogeneous sites. The project described in this paper investigates the suitability of CORBA as middleware for a DRTDB. Two extensions to CORBA, Fault-Tolerant CORBA (FT-CORBA) and Real-Time CORBA (RT-CORBA), are of particular interest, since their combination provides object replication and end-to-end predictability, respectively. The project focuses on the ability of RT-CORBA to meet hard deadlines and of FT-CORBA to maintain replica consistency through replication with eventual consistency. The investigation of the combination of RT-CORBA and FT-CORBA results in two proposed architectures that meet real-time requirements and provide replica consistency with CORBA as the middleware in a DRTDB.
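A toy illustration (not the FT-CORBA API and not the thesis's architectures) of what "replication with eventual consistency" means: a primary applies an update immediately and propagates it to replicas asynchronously, so the replicas lag briefly but converge once the propagation queue is drained.

```python
from collections import deque

class Replica:
    def __init__(self):
        self.data = {}
    def apply(self, key, value):
        self.data[key] = value

class Primary(Replica):
    def __init__(self, replicas):
        super().__init__()
        self.replicas = replicas
        self.pending = deque()          # updates not yet propagated
    def write(self, key, value):
        self.apply(key, value)          # primary is updated immediately
        self.pending.append((key, value))
    def propagate(self):
        """Push queued updates to all replicas (e.g., run periodically)."""
        while self.pending:
            key, value = self.pending.popleft()
            for r in self.replicas:
                r.apply(key, value)

backups = [Replica(), Replica()]
primary = Primary(backups)
primary.write("sensor_42", 17.5)
print(backups[0].data)   # {} -- replicas lag behind the primary
primary.propagate()
print(backups[0].data)   # {'sensor_42': 17.5} -- eventually consistent
```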
184

Real Time Evolution (RTE) for on-line optimisation of continuous and semi-continuous chemical processes

Sequeira, Sebastián Eloy 15 July 2003 (has links)
In general, process control is very effective when the desired operating point has been determined by prior analysis and the control system has sufficient capacity to respond to disturbances. While process control is required to regulate some process variables, it may not be appropriate for all important variables. In some situations, the best operating conditions change because of the combined effect of internal and external disturbances, and a fixed control design may not respond properly to these changes. When certain conditions are met, on-line optimisation becomes a suitable choice for tracking the moving optimum. In order to "pursue" that moving optimum, on-line optimisation periodically solves optimisation problems using data coming directly from the plant and a continuously updated model. The most common use of on-line optimisation is in continuous processes, mainly because steady-state models are simpler and easier to develop and validate, and because continuous processes commonly have high production rates, so that small relative improvements in process efficiency translate into significant economic gains. Nevertheless, although the use of steady-state models greatly simplifies the modelling task, it raises other issues associated with the validity of the steady-state assumption. Large-scale applications of on-line optimisation have started to spread; however, even though several vendors offer products and services in the area, most industrial applications address advanced control issues, leaving on-line optimisation in a secondary role. Industry practitioners have reported that after four decades there has been a progressive improvement in on-line optimisation methodology, but that the original weak points, or more generally some common causes of poor performance, still remain. These issues are directly related to steady-state detection (or disturbance frequency) and to the optimisation itself. The objectives of this thesis are therefore directed at overcoming, at least partially, the weak points of the current approach. The result is an alternative strategy that takes full advantage of on-line measurements and seeks periodic improvement rather than formal optimisation. It is shown that the proposed approach is very effective and can be applied not only to on-line set-point optimisation but also to the on-line discrete decisions required in processes with decaying performance (an aspect typically solved off-line via mathematical programming). The thesis is structured as follows. The first chapter explains the main motivations and objectives of the work, while chapter 2 consists of a literature review that addresses, to some extent, the most significant issues around the on-line optimisation functionality. Chapters 3 and 4 then introduce two methodologies that use the proposed strategy for on-line optimisation, which is the main contribution of the thesis. The first (chapter 3) focuses on tracking an optimum that moves mainly because of the combined effect of external and internal disturbances. A parallel methodology, explained in chapter 4, is conceived for processes that present decaying performance and require discrete decisions related to maintenance actions. Both chapters include a first, rather theoretical, part and a second part devoted to validation on typical benchmarks. Chapter 5 then describes the application of these methodologies to two existing industrial scenarios, in order to complement the results obtained with the benchmarks. Chapter 6 addresses two implementation issues: the influence of the adjustable parameters of the proposed procedure and the software architecture used. Finally, chapter 7 summarises the main conclusions and observations.
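A minimal sketch of the "periodic improvement rather than formal optimisation" idea: at each execution period, evaluate small perturbations of the current set-points against a model refreshed with plant data and keep the best candidate. The objective function, step size, and two-set-point example below are illustrative assumptions, not the thesis's Real Time Evolution procedure.

```python
import itertools

def rte_step(setpoints, objective, step=0.5):
    """Return the neighbouring set-point vector with the best objective value."""
    best, best_value = setpoints, objective(setpoints)
    deltas = [-step, 0.0, step]
    for moves in itertools.product(deltas, repeat=len(setpoints)):
        candidate = [s + d for s, d in zip(setpoints, moves)]
        value = objective(candidate)
        if value > best_value:
            best, best_value = candidate, value
    return best

# Hypothetical two-set-point process model: profit peaks at (10, 5).
def profit(sp):
    return -((sp[0] - 10.0) ** 2 + (sp[1] - 5.0) ** 2)

sp = [8.0, 3.0]
for _ in range(10):                 # one call per execution period
    sp = rte_step(sp, profit)
print([round(x, 1) for x in sp])    # drifts toward the optimum over the cycles
```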
185

An integrated approach to real-time multisensory inspection with an application to food processing

Ding, Yuhua 26 November 2003 (has links)
Real-time inspection based on machine vision technologies is widely used for quality control and cost reduction in a variety of application domains. The high demands on inspection performance and the low-cost requirements make algorithm design a challenging task that requires new and innovative methodologies in image processing and fusion. In this research, an integrated approach combining novel image processing and fusion techniques is proposed for the efficient design of accurate, real-time machine-vision-based inspection algorithms, with an application to a food processing problem. First, a general methodology is introduced for the effective detection of defects and foreign objects that possess certain spectral and shape features. The factors that affect the performance metrics are analyzed, and a recursive segmentation and classification scheme is proposed to improve segmentation accuracy. The developed methodology is applied to real-time fan bone detection in deboned poultry meat, achieving a detection rate of 93% and a false alarm rate of 7% in lab-scale testing on 280 samples. Second, a novel snake-based algorithm is developed for the segmentation of vector-valued images. The snakes are driven by the weighted sum of the optimal forces derived from the corresponding energy functionals in each image, where the weights are determined by a novel metric that measures both local contrast and noise power in the individual sensor images. This algorithm is effective in improving segmentation accuracy when imagery from multiple sensors is available to the inspection system. The effectiveness of the developed algorithm is verified using (i) synthesized images, (ii) real medical and aerial images, and (iii) color and x-ray chicken breast images. The results further confirm that the algorithm yields higher segmentation accuracy than monosensory methods and that the algorithm accommodates a certain amount of registration error. This feature-level image fusion technique can be combined with pixel- and decision-level techniques to improve overall inspection system performance.
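A sketch of the force-fusion step described above: each sensor's force field is weighted by a contrast-versus-noise score before summation. The simple standard-deviation-over-noise-power weight is a stand-in assumption for the thesis's metric, and the images and force fields are synthetic placeholders.

```python
import numpy as np

def sensor_weight(image, noise_power):
    """Weight a sensor by local contrast (intensity std) over its noise power."""
    contrast = float(np.std(image))
    return contrast / (noise_power + 1e-9)

def fused_force(forces, images, noise_powers):
    """Weighted sum of per-sensor force fields, with weights normalised to 1."""
    weights = np.array([sensor_weight(im, n) for im, n in zip(images, noise_powers)])
    weights /= weights.sum()
    return sum(w * f for w, f in zip(weights, forces))

# Hypothetical example: a colour-channel image and an x-ray image of the same
# scene, each with its own 2-D force field on a 4x4 grid.
rng = np.random.default_rng(0)
imgs = [rng.normal(0.5, 0.2, (4, 4)), rng.normal(0.5, 0.05, (4, 4))]
forces = [rng.normal(size=(4, 4, 2)), rng.normal(size=(4, 4, 2))]
print(fused_force(forces, imgs, noise_powers=[0.04, 0.01]).shape)  # (4, 4, 2)
```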
186

Effect Of Some Software Design Patterns On Real Time Software Performance

Ayata, Mesut 01 June 2010 (has links) (PDF)
In this thesis, the effects of some software design patterns on real-time software performance are investigated. In real-time systems, performance requirements are critical, and real-time system developers usually use functional languages to meet them. Using an object-oriented language may be expected to reduce performance; however, if suitable software design patterns are applied carefully, the reduction in performance can be avoided. In this thesis, appropriate real-time software performance metrics are selected and used to measure the performance of real-time software systems.
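The thesis's own metrics and benchmarks are not given in the abstract; as an illustration only, one typical measurement of pattern overhead is timing a direct call against the same work routed through a pattern-style indirection. The Strategy-like class and workload below are hypothetical.

```python
import timeit

def work(x):
    return x * x + 1

class Strategy:
    """Minimal Strategy-style indirection around the same computation."""
    def execute(self, x):
        return work(x)

strategy = Strategy()

direct  = timeit.timeit(lambda: work(7), number=1_000_000)
pattern = timeit.timeit(lambda: strategy.execute(7), number=1_000_000)
print(f"direct: {direct:.3f}s  via strategy: {pattern:.3f}s  "
      f"overhead: {(pattern / direct - 1) * 100:.1f}%")
```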
188

Networking infrastructure and data management for large-scale cyber-physical systems

Han, Song, doctor of computer sciences 25 February 2013 (has links)
A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems formed by networked sensors and actuators and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem, and how to interconnect these physically separated subsystems to achieve secure, reliable, and real-time communication, is a major challenge. In this thesis, we first present a TDMA-based, low-power, and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems that require flexible topology control, secure and reliable communication, and adjustable real-time service support. We then describe the network management techniques designed to ensure reliable routing and real-time services inside the subsystems, and the data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate the proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. We also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer and a constrained application protocol layer. This approach makes the underlying connectivity technologies transparent to application developers, enabling rapid application development and efficient migration among different CPS platforms. At the end of this thesis, we present a semi-autonomous robotic system called the cyberphysical avatar. The cyberphysical avatar is built on our proposed network infrastructure and data management techniques. By integrating recent advances in body-compliant control in robotics and neuroevolution in machine learning, the cyberphysical avatar can adjust to an unstructured environment and perform physical tasks subject to critical timing constraints while under human supervision.
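A toy illustration of why a TDMA schedule gives bounded, collision-free access: each node receives dedicated transmit slots in a superframe, so its worst-case wait is bounded by the superframe length. The slot length, slot counts, and node names below are hypothetical and do not reproduce the thesis's protocol.

```python
SLOT_MS = 10

def build_superframe(slot_requests):
    """slot_requests: {node: slots per superframe} -> ordered slot table."""
    schedule = []
    for node, count in slot_requests.items():
        schedule.extend([node] * count)
    return schedule

def worst_case_wait_ms(schedule, node):
    """Longest gap (in ms) a node waits between two of its own slots."""
    slots = [i for i, n in enumerate(schedule) if n == node]
    n = len(schedule)
    gaps = [(slots[(i + 1) % len(slots)] - s) % n or n for i, s in enumerate(slots)]
    return max(gaps) * SLOT_MS

frame = build_superframe({"sensor_A": 2, "sensor_B": 1, "actuator_C": 1})
print(frame)                                  # ['sensor_A', 'sensor_A', 'sensor_B', 'actuator_C']
print(worst_case_wait_ms(frame, "sensor_A"))  # 30 ms, bounded by the superframe
```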
189

Fault-Tolerance Strategies and Probabilistic Guarantees for Real-Time Systems

Aysan, Hüseyin January 2012 (has links)
Ubiquitous deployment of embedded systems is having a substantial impact on our society, since they interact with our lives in many critical real-time applications. Typically, embedded systems used in safety or mission critical applications (e.g., aerospace, avionics, automotive or nuclear domains) work in harsh environments where they are exposed to frequent transient faults such as power supply jitter, network noise and radiation. They are also susceptible to errors originating from design and production faults. Hence, they have the design objective to maintain the properties of timeliness and functional correctness even under error occurrences. Fault-tolerance plays a crucial role towards achieving dependability, and the fundamental requirement for the design of effective and efficient fault-tolerance mechanisms is a realistic and applicable model of potential faults and their manifestations. An important factor to be considered in this context is the random nature of faults and errors, which, if addressed in the timing analysis by assuming a rigid worst-case occurrence scenario, may lead to inaccurate results. It is also important that the power, weight, space and cost constraints of embedded systems are addressed by efficiently using the available resources for fault-tolerance. This thesis presents a framework for designing predictably dependable embedded real-time systems by jointly addressing the timeliness and the reliability properties. It proposes a spectrum of fault-tolerance strategies particularly targeting embedded real-time systems. Efficient resource usage is attained by considering the diverse criticality levels of the systems' building blocks. The fault-tolerance strategies are complemented with the proposed probabilistic schedulability analysis techniques, which are based on a comprehensive stochastic fault and error model.
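As a worked illustration of how a random fault model enters a timing guarantee (not the thesis's analysis): if transient faults arrive as a Poisson process and each fault costs one re-execution, a job meets its deadline exactly when the number of faults hitting it fits into its slack, so the guarantee is a Poisson tail probability. The rate and task parameters below are illustrative assumptions.

```python
import math

def deadline_meet_probability(C, D, lam):
    """P(job of WCET C meets deadline D) with Poisson(lam) faults, each adding C."""
    slack = D - C
    if slack < 0:
        return 0.0
    k_max = int(slack // C)              # re-executions that still fit in the slack
    mean_faults = lam * D                # expected number of faults over the window
    return sum(math.exp(-mean_faults) * mean_faults ** k / math.factorial(k)
               for k in range(k_max + 1))

# Job with 2 ms WCET, 10 ms deadline, faults at 0.01 per ms (10 per second).
print(f"{deadline_meet_probability(C=2.0, D=10.0, lam=0.01):.6f}")
```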
190

Reliability for Hard Real-time Communication in Packet-switched Networks

Ganjalizadeh, Milad January 2014 (has links)
Nowadays, many companies use Ethernet for industrial applications. Industrial Ethernet has specific requirements, arising from its applications and environmental conditions, that distinguish it from corporate LANs. Real-time guarantees, which require precise synchronization between all communication devices, as well as reliability, are key criteria in the performance evaluation of different methods [1].  High bandwidth, high availability, reduced cost, support for open infrastructure, and a deterministic architecture make packet-switched networks suitable for a variety of industrial distributed hard real-time applications. Although research on guaranteeing timing requirements in packet-switched networks has been done, communication reliability is still an open problem for hard real-time applications. In this thesis report, a framework for enhancing reliability in multihop packet-switched networks is presented. Moreover, a novel admission control mechanism based on real-time analysis is proposed to provide deadline guarantees for hard real-time traffic. A generic and flexible simulator has been implemented for this research study to measure the defined performance metrics; thanks to its flexibility, it can also be used in future research. The performance evaluation of the proposed solution shows a possible improvement in the message error rate by several orders of magnitude, while the decrease in network utilization stays at a reasonable level.
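A simplified sketch of what an admission-control check of this kind can look like: a new hard real-time flow is admitted only if the bandwidth demand on every link along its path stays within the share of capacity reserved for real-time traffic. The reserved share, flow parameters, and link names are illustrative assumptions and do not reproduce the thesis's analysis.

```python
RT_SHARE = 0.7   # fraction of link capacity reserved for real-time flows

def demand(flow):
    """Bandwidth demand in bits/s for one message of `size` bits per `period` s."""
    return flow["size"] / flow["period"]

def admit(new_flow, admitted, link_capacity):
    """Admit new_flow iff every link on its path keeps its real-time load within bound."""
    for link in new_flow["path"]:
        load = sum(demand(f) for f in admitted if link in f["path"])
        if load + demand(new_flow) > RT_SHARE * link_capacity:
            return False
    return True

admitted = [{"size": 12_000, "period": 0.010, "path": ["S1-S2", "S2-S3"]}]
candidate = {"size": 64_000, "period": 0.005, "path": ["S2-S3"]}
print(admit(candidate, admitted, link_capacity=100_000_000))  # True on 100 Mb/s links
```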
