
Planificación de Diferentes Clases de Aplicaciones en Entornos No Dedicados Considerando Procesadores Multicore

García Gutiérrez, José Ramón 19 July 2010 (has links)
Today it is practically impossible to find a large institution without a considerable number of computers, given the high level of computerization of modern society. The enormous potential of these thousands of machines attracts strong interest in science and industry, generating viable options for their exploitation. Universities, institutions that have historically remained at the forefront of research and scientific innovation, are particularly well positioned to generate both computing resources and the need to use them. The computing power present in university laboratories and study rooms, natural groupings of computing resources, creates great opportunities for parallel computing and encourages us to seek viable ways to exploit it. As a result of this interest, our group has created a scheduling environment focused on non-dedicated clusters. The constant, rapid evolution of the components, at the level of the CPU architecture and the operating system as well as of the applications executed, forces us to adapt our proposals. Our proposal is to create a Virtual Machine with dual functionality: to execute the local user workload and to exploit idle periods of the nodes to run parallel workload. Both the types of applications and the hardware characteristics of the target scenario have since evolved. New types of parallel applications with periodic CPU requirements are increasingly common in science and industry; such applications may require a specific turnaround time or a given Quality of Service (QoS).
Since our environment is designed to work on non-dedicated clusters, the knowledge we have of the local users is especially important. A local user may be watching a video stored on his or her computer, which implies periodic CPU needs and greater memory use. New types of applications, such as video on demand or virtual reality, are characterized by the need to meet their deadlines, presenting periodic resource requirements. Applications of this type, in which missing a deadline is not considered a severe failure, are known in the literature as periodic soft real-time (SRT) applications. This evolution of user needs is not the only development worthy of attention. The growth in the computing power of processors has slowed in recent years because of the physical barriers of space and signal speed, forcing processor manufacturers to explore other avenues of growth. For some time now, application parallelism has been one of their main bets. Today dual-core processors are the minimum configuration found in a computer, and the number of cores is expected to keep growing in the coming years. Non-dedicated clusters offer great potential for use, because the hardware resources are already available and the parallel computation is performed simultaneously with that of the local user. Picturing the current scenario in non-dedicated clusters, we find new desktop and parallel applications, as well as more powerful and complex hardware platforms.
In this situation, investigating the problem and making proposals for scheduling the different types of applications on non-dedicated clusters, taking multicore platforms into account, is a new challenge for researchers and forms the core of this work.
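The periodic soft real-time model described above can be sketched in a few lines. The task parameters and the `available_cpu` callback below are illustrative assumptions for the sketch, not the thesis's actual scheduler interface; QoS is measured as the fraction of activations that meet their deadline, since missed deadlines are tolerated rather than fatal:

```python
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    period: int    # activation period (time units)
    demand: int    # CPU time needed per activation
    deadline: int  # relative deadline (<= period)

def qos(task: PeriodicTask, available_cpu, horizon: int) -> float:
    """Fraction of activations meeting their deadline, given a function
    available_cpu(t) -> CPU time the task receives at time slot t
    (the rest of the slot is consumed by the local user's workload)."""
    met = total = 0
    for release in range(0, horizon, task.period):
        total += 1
        done, t = 0, release
        while t < release + task.deadline and done < task.demand:
            done += available_cpu(t)
            t += 1
        if done >= task.demand:
            met += 1
    return met / total
```

A video-playback-like task needing 3 time units every 10, with a 5-unit deadline, achieves full QoS when the node is idle and zero QoS when the local user monopolizes the CPU.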

IMPACT OF DIET COMPOSITION ON RUMEN BACTERIAL PHYLOGENETICS

2013 February 1900 (has links)
ABSTRACT Two experiments were conducted to determine the effects of various forage to concentrate ratios on the rumen microbial ecosystem and rumen fermentation parameters using culture-independent methods. In the first experiment, cattle were fed either a high concentrate (HC) or a high concentrate without forage (HCNF) diet. Comparison of rumen fermentation parameters between these two diets showed that duration of time spent below pH 5.2 and rumen osmolality were higher for HCNF. Calculations using Simpson's index showed a greater diversity of dominant species for HCNF than for HC based on 16S rRNA PCR-DGGE. Real-time PCR showed populations of Fibrobacter succinogenes (P=0.01) were lower in HCNF than in HC diets. Ruminococcus spp., F. succinogenes and Selenomonas ruminantium were present at higher (P≤0.05) concentrations in solid than in liquid digesta in both diets. The second experiment compared cattle as they adapted from a strictly forage to a concentrate diet, after which they were subjected to an acidotic challenge and a recovery period (Forage, Mixed Forage, High Grain, Acidosis and Recovery). A total of 153,621 high-quality bacterial sequences were obtained from biopsied rumen epithelium, and 407,373 sequences from the solid and liquid phases of rumen contents. Only 14 epithelial genera representing >1.0% of the epimural population differed (P ≤ 0.05) among dietary treatments. However, clustering showed a closer relation in bacterial profiles for the Forage and Mixed Forage diets as compared to the High Grain, Acidosis and Recovery diets. Several genera identified on the epithelium, including Atopobium, Desulfocurvus, Fervidicola, Lactobacillus and Olsenella, increased as a result of acidosis. However, the changes in bacterial populations during the acidosis challenge were not sustained during the recovery period. This indicates a high level of stability within the rumen epimural community.
An epithelial core microbiome was determined which explained 21% of the enumerable rumen population across all treatment samples. Cluster analysis of the solid- and liquid-phase rumen bacterial populations showed that these populations differed (P ≤ 0.10) between forage- and grain-based diets. Rumen core microbiome analysis found 32 OTUs representing 10 distinct bacterial taxa in whole rumen contents for all dietary treatments. Heifers that developed clinical acidosis, versus those with subclinical acidosis, showed increases in the genera Acetitomaculum, Lactobacillus, Prevotella, and Streptococcus. Variation in microbial taxa as an effect of both treatment and animal was evident in the solid and liquid fractions of the rumen digesta. However, the impacts of a dietary treatment were transient, and despite an acidotic challenge, the rumen microbiota were able to recover within a week of perturbation. The bacterial populations in the rumen are highly diverse, as indicated by DGGE analysis, and showed clear distinctions not only between dietary treatments and individual animals, but also between the epithelial-, liquid- and solid-associated populations on the same diet. Molecular techniques provide an increased understanding of the impact of dietary change on rumen bacterial populations, and conclusions derived using these techniques may not match those previously derived using traditional laboratory culturing techniques.
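The diversity comparison above relies on Simpson's index. One common form, 1 − Σp², where p is each species' proportional abundance, can be computed as follows (a generic sketch of the index, not necessarily the exact variant used in the experiment):

```python
def simpson_diversity(counts):
    """Simpson's diversity index (1 - sum of squared proportional
    abundances): higher values indicate a more diverse community."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)
```

An evenly distributed community scores higher than one dominated by a single taxon, which is the sense in which HCNF showed "greater diversity of dominant species" than HC.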

A Link-Level Communication Analysis for Real-Time NoCs

Gholamian, Sina January 2012 (has links)
This thesis presents a link-level latency analysis for real-time network-on-chip interconnects that use priority-based wormhole switching. This analysis incorporates both direct and indirect interferences from other traffic flows, and it leverages pipelining and parallel transmission of data across the links. The resulting link-level analysis provides a tighter worst-case upper-bound than existing techniques, which we verify with our analysis and simulation experiments. Our experiments show that on average, link-level analysis reduces the worst-case latency by 28.8%, and improves the number of flows that are schedulable by 13.2% when compared to previous work.
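For context, the classic flow-level worst-case latency bound for priority-based wormhole switching, the kind of analysis this link-level approach tightens, iterates to a fixed point over higher-priority interferers. The sketch below is a simplified version under stated assumptions: only direct interference is counted, and the flow parameters (basic latency C, interferer latencies and periods) are hypothetical:

```python
import math

def worst_case_latency(C, interferers, max_iter=1000):
    """Fixed-point iteration for the worst-case traversal latency of one
    flow. C is the flow's basic (interference-free) network latency;
    interferers is a list of (C_j, T_j) pairs for higher-priority flows
    sharing a link, with latency C_j and period T_j."""
    R = C
    for _ in range(max_iter):
        R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in interferers)
        if R_next == R:
            return R
        R = R_next
    raise ValueError("no fixed point within the iteration bound")
```

Each iteration re-counts how many times every higher-priority flow can preempt within the current latency estimate; convergence gives the worst-case bound.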

Tracking a tennis ball using image processing techniques

Mao, Jinzi 30 August 2006 (has links)
In this thesis we explore several algorithms for automatic real-time tracking of a tennis ball. We first investigate the use of background subtraction with color/shape recognition for fast tracking of the tennis ball. We then compare our solution with a cascade of boosted Haar classifiers [68] in a simulated environment to estimate the accuracy and ideal processing speeds. The results show that background subtraction techniques were not only faster but also more accurate than Haar classifiers. Following these promising results, we extend the background subtraction approach and develop three other improved techniques. These techniques use more accurate background models and more reliable, stringent criteria. They allow us to track the tennis ball in a real tennis environment with cameras having higher resolutions and frame rates.

We tested our techniques with a large number of real tennis videos. In the indoor environment, we achieved a true positive rate of about 90%, a false alarm rate of less than 2%, and a tracking speed of about 20 fps. For the outdoor environment, the performance of our techniques is not as good as in the indoor case due to the complexity and instability of the outdoor environment. The problem can be solved by resetting our system so that the camera focuses mainly on the tennis ball, minimizing the influence of external factors.

Despite the existing limitations, our techniques are able to track a tennis ball with very high accuracy and at a speed that most currently available tracking techniques cannot achieve. We are confident that the motion information generated by our techniques is reliable and accurate. Given these promising results, we believe some real-world applications can be constructed.
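The core background-subtraction idea, a running-average background model, an absolute-difference threshold, and a centroid estimate of the moving object, can be sketched with NumPy alone. This is a minimal illustration with hypothetical parameter values; the thesis's actual models and acceptance criteria are more elaborate:

```python
import numpy as np

def track_ball(frames, alpha=0.05, thresh=30):
    """Background-subtraction tracking sketch: maintain a running-average
    background, threshold the absolute difference, and return the centroid
    (row, col) of the foreground mask per frame (None if nothing moves)."""
    background = frames[0].astype(float)
    positions = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        mask = diff > thresh
        if mask.any():
            rows, cols = np.nonzero(mask)
            positions.append((rows.mean(), cols.mean()))
        else:
            positions.append(None)
        # slowly adapt the background to gradual lighting changes
        background = (1 - alpha) * background + alpha * frame
    return positions
```

The slow background update (`alpha`) is what distinguishes this from naive frame differencing: static scene changes are absorbed into the model while fast-moving objects keep standing out.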

Utilization-based delay guarantee techniques and their applications

Wang, Shengquan 15 May 2009 (has links)
Many real-time systems demand effective and efficient delay-guaranteed services to meet the timing requirements of their applications. We note that a system provides a delay-guaranteed service if the system can ensure that each task will meet its predefined end-to-end deadline. Admission control plays a critical role in providing delay-guaranteed services. The major function of admission control is to determine the admissibility of a new task: a new task will be admitted into the system if the deadlines of all existing tasks and the new task can be met. Admission control has to be effective and efficient, meaning that a decision should be made quickly while admitting the maximum number of tasks. In this dissertation, we study a utilization-based admission control mechanism. Utilization-based admission control makes an admission decision based on a simple resource utilization test: a task will be admitted if the resource utilization is lower than a pre-derived safe resource utilization bound. The challenge in obtaining a safe resource utilization bound is how to perform delay analysis offline, which is the main focus of this dissertation. For this, we develop utilization-based delay guarantee techniques to render utilization-based admission control both efficient and effective, which is further confirmed with our data. We develop techniques for several systems that are of practical importance. We first consider wired networks with the Differentiated Services model, which is well known for supporting scalable services in computer networks. We consider both deterministic and statistical delay-guaranteed services in wired networks with the Differentiated Services model. We then extend our work to wireless networks, which have become popular for both civilian and mission-critical applications. The variable service capacity of a wireless link presents more of a challenge in providing delay-guaranteed services in wireless networks.
Finally, we study ways to provide delay-guaranteed services in component-based systems, which now serve as an important platform for developing a new generation of computer software. We show that with our utilization-based delay guarantee technique, component-based systems can provide efficient and effective delay-guaranteed services while maintaining such advantages as the reusability of components.
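The utilization test described above can be sketched in a few lines. Here the classic Liu-Layland rate-monotonic bound stands in for the pre-derived safe bound, which the dissertation derives per system through offline delay analysis; the task representation and names are illustrative assumptions:

```python
def liu_layland_bound(n: int) -> float:
    """Classic rate-monotonic safe utilization bound for n periodic tasks
    (one well-known example of a pre-derived bound)."""
    return n * (2 ** (1 / n) - 1)

def admit(new_task, admitted, bound):
    """Utilization-based admission test: admit the new task only if the
    total utilization (sum of demand/period) stays within the safe bound.
    Tasks are (cpu_demand, period) pairs."""
    tasks = admitted + [new_task]
    return sum(c / t for c, t in tasks) <= bound
```

The appeal is that the online decision is a constant-time sum-and-compare; all the hard delay analysis is pushed offline into deriving the bound.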

Energy Efficient Scheduling for Real-Time Systems

Gupta, Nikhil 2011 December 1900 (has links)
The goal of this dissertation is to extend the state of the art in real-time scheduling algorithms to achieve energy efficiency. Currently, Pfair scheduling is one of the few scheduling frameworks that can optimally schedule a periodic real-time taskset on a multiprocessor platform. Despite its theoretical optimality, there are large concerns about the efficiency and applicability of Pfair scheduling in practical situations. This dissertation studies and proposes solutions to such efficiency and applicability concerns. It also explores temperature-aware energy management in the domain of real-time scheduling. The thesis of this dissertation is: the implementation efficiency of Pfair scheduling algorithms can be improved. Further, the temperature awareness of a real-time system can be improved, while considering variation in task execution times, to reduce energy consumption. This thesis is established through research in a number of directions. First, we explore the applicability of the Dynamic Voltage and Frequency Scaling (DVFS) feature of the underlying platform within Pfair-scheduled systems. We propose techniques to reduce energy consumption in Pfair scheduling by using DVFS. Next, we explore the problem of quantum-size selection in Pfair-scheduled systems so that runtime overheads are minimized. We also propose a hardware design for a central Pfair scheduler core in a multiprocessor system to minimize the overheads and energy consumption of Pfair scheduling. Finally, we propose a temperature-aware energy management scheme for tasks with varying execution times.
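For context, Pfair scheduling divides each task into quantum-length subtasks and constrains each subtask to a window of time slots. The standard windows, pseudo-release ⌊(k−1)/w⌋ and pseudo-deadline ⌈k/w⌉ for the k-th subtask of a task with weight w (execution requirement over period), can be computed as follows. This is a textbook sketch of the Pfair task model, not the dissertation's implementation:

```python
from fractions import Fraction
from math import floor, ceil

def pfair_window(k: int, weight: Fraction):
    """Window of the k-th subtask (k >= 1) of a Pfair task with the given
    weight: (pseudo-release slot, pseudo-deadline slot). Executing every
    subtask inside its window keeps the task's allocation within one
    quantum of the ideal fluid schedule at all times."""
    release = floor((k - 1) / weight)
    deadline = ceil(k / weight)
    return release, deadline
```

For a task of weight 3/10 the three subtasks get windows [0,4), [3,7) and [6,10) within each period of 10 slots, which is why quantum size matters so much for runtime overhead: every slot boundary is a potential scheduling event.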

A Real-Time Address Trace Compressor for Embedded Microprocessors

Huang, Shyh-Ming 03 September 2003 (has links)
Address trace compression allows the address data generated by the instruction fetch stage of the microprocessor to be captured compactly for later observation and analysis. This real-time trace compression hardware is the primary component of a real-time trace system. In this paper, we present how to design and implement such a real-time address trace compressor. The address trace compressor can perform accurate, successive trace collection of unlimited length and can be used with various embedded microprocessors without influencing their operation. It also has abundant reconfigurable parameters that can be used to develop a cost-effective trace system. The experimental results show that this compressor can reach a compression ratio as high as 1:100. Hence, by utilizing this real-time compression technique, the trace depth of the new trace system can be 20 times greater than that of existing in-circuit emulators.
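One common software analogue of such trace compression is to collapse runs of sequential fetch addresses so that only branch discontinuities are stored, which is what makes instruction traces so compressible. The sketch below is an illustrative scheme under that assumption, not the hardware design described here:

```python
def compress_trace(addresses, step=4):
    """Collapse sequential fetches (addr + step) into run lengths, so that
    only discontinuities (taken branches, exceptions) start a new record.
    Returns a list of (start_address, run_length) pairs."""
    if not addresses:
        return []
    runs = []
    start, length = addresses[0], 1
    for prev, cur in zip(addresses, addresses[1:]):
        if cur == prev + step:
            length += 1
        else:
            runs.append((start, length))
            start, length = cur, 1
    runs.append((start, length))
    return runs

def decompress_trace(runs, step=4):
    """Exact inverse of compress_trace: expand each run back to addresses."""
    return [start + i * step for start, length in runs for i in range(length)]
```

Because straight-line code dominates typical instruction streams, long runs are the common case, which is how ratios on the order of 1:100 become reachable.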

Software Design of A Soft Real-Time Communication Synthesis Method

Liao, Jian-Hong 08 September 2004 (has links)
In the era of system-on-chip, many hardware modules are embedded on a single chip, and more messages are communicated among on-chip modules. On-chip communication bandwidth is thus scaled up dramatically. This causes a significant increase in routing area as well as a relative reduction in system performance, affecting the overall feasibility of a system chip. In order to solve this problem and meet the communication performance requirements of application systems, we need to consider the factors that affect overall system performance and cost: communication resource allocation, message routing, and transmission control design. We therefore propose a soft real-time communication synthesis method that applies simulated annealing optimization. In the process, it carries out several tasks: calibration of dynamic communication cases, communication resource allocation, message routing path generation, and estimation of overall communication performance and system cost. In this research, we designed the experimental software of the communication synthesis method. We performed experiments for its system evaluation to verify its effectiveness on system-on-chip designs.
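The simulated-annealing core the method applies can be sketched generically. In the synthesis setting, the cost function would encode communication performance and system cost, and the neighbor function would perturb resource allocation or routing; all names and parameters below are illustrative assumptions:

```python
import math
import random

def anneal(initial, cost, neighbor, t0=100.0, cooling=0.95, steps=3000, seed=1):
    """Generic simulated-annealing skeleton: always accept improvements,
    accept worse candidates with probability exp(-delta / T), and cool the
    temperature T geometrically so the search turns greedy over time."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best
```

The early high-temperature phase lets the search escape poor routing or allocation choices; the late greedy phase refines the best configuration found.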

Wheeled Inverted Pendulum with Embedded Component System : A Case Study

Oyama, Hiroshi, Ukai, Takayuki, Takada, Hiroaki, Azumi, Takuya January 2010 (has links)
No description available.

Evaluation of alterations in gene expression in MCF-7 cells induced by the agricultural chemicals Enable and Diazinon

Mankame, Tanmayi Pradeep 29 August 2005 (has links)
Steroid hormones, such as estrogen, are produced in one tissue and carried through the blood stream to target tissues, in which they bind to highly specific nuclear receptors and trigger changes in gene expression and metabolism. Industrial chemicals, such as bisphenol A, and many agricultural chemicals, including permethrin and fenvalerate, are known to have estrogenic potential and therefore are estrogen mimics. The widely used agricultural chemicals Enable (fungicide) and Diazinon (insecticide) were evaluated to examine their toxicity and estrogenicity. MCF-7 cells, an estrogen-dependent human breast cancer line, were utilized for this purpose. MCF-7 cells were treated with 0.033-3.3 ppb (ng/ml) of Enable and 0.3-67 ppm of Diazinon, and gene expression was compared to that in untreated cells. Microarray analysis showed down-regulation of eight genes and up-regulation of thirty-four genes in cells treated with 3.3 ppb of Enable, compared to untreated cells. Similarly, in cells treated with 67 ppm of Diazinon, three genes were down-regulated and twenty-seven genes were up-regulated. For both chemicals, specific genes were selected for special consideration. RT-PCR confirmed the results obtained from analysis of the microarray. These studies were designed to provide baseline data on the gene expression-altering capacity of specific chemicals and will allow assessment of the deleterious effects caused by exposure to the aforementioned chemicals.
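The up/down-regulation calls reported above can be illustrated with a simple fold-change rule against the untreated control. The 2-fold threshold and the gene names below are illustrative assumptions for the sketch, not the study's actual cutoff or data:

```python
def classify_regulation(treated, control, fold=2.0):
    """Label each gene up- or down-regulated by fold change relative to
    untreated control expression; anything within the threshold band is
    called unchanged. Both arguments map gene name -> expression level."""
    labels = {}
    for gene, level in treated.items():
        ratio = level / control[gene]
        if ratio >= fold:
            labels[gene] = "up"
        elif ratio <= 1 / fold:
            labels[gene] = "down"
        else:
            labels[gene] = "unchanged"
    return labels
```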
