  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Software Verification for a Custom Instrument using VectorCAST and CodeSonar

Ward, Christina Dawn 01 May 2011 (has links)
The goal of this thesis is to apply a structured verification process to a software package using a set of commercially available verification tools. The software package to be verified is adapted from a project that was developed to monitor an industrial machine at the Oak Ridge National Laboratory and includes two major subsystems. One subsystem, referred to as the Industrial Machine Monitoring Instrument (IMMI), connects to a machine and monitors operating parameters using common industrial sensors. A second subsystem, referred to as the Distributed Control System (DCS), interfaces between the IMMI and a personal computer, which provides a human-machine interface using HyperTerminal. Both the IMMI and DCS are built around Freescale’s MC9S12XDP microcontroller using CodeWarrior as the Integrated Development Environment (IDE). The software package subjected to the structured verification process includes the main C code with its header file and the code for its interrupt events for the IMMI, as well as the main C code for the DCS and its interrupt events. The software package is exposed to the scrutiny of two verification tools, VectorCAST and CodeSonar. VectorCAST is used to execute test cases and report code coverage in terms of statement and branch coverage. CodeSonar is used to identify issues in the code at compile time, such as allocation/deallocation issues, unsafe functions, and language-use problems. The results from both verification tools are evaluated and the necessary changes are made to the software package. The modified software is then tested again with VectorCAST and CodeSonar. The final verification step is downloading the modified code into the IMMI and DCS microcontrollers and testing the overall system, using hardware developed to simulate realistic signals, to ensure the expected results are achieved.
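The statement-versus-branch coverage distinction that VectorCAST reports can be illustrated with a small, hypothetical example (not code from the IMMI or DCS): a single test that enters the `if` body executes every statement, yet exercises only one of the two branch outcomes.

```python
def clamp_negative_to_zero(x):
    """Return x with negative values clamped to zero."""
    if x < 0:        # a test with x = -5 executes every statement,
        x = 0        # but covers only the "taken" branch outcome
    return x

# One test case: 100% statement coverage, only 50% branch coverage.
result_taken = clamp_negative_to_zero(-5)      # if-branch taken
# A second case with x >= 0 is needed to cover the other outcome.
result_not_taken = clamp_negative_to_zero(3)   # if-branch not taken
```

A tool reporting branch coverage would flag the missing `x >= 0` case even though every line had already executed.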
402

Enforcing Temporal Constraints in Embedded Control Systems

Sandström, Kristian January 2002 (has links)
No description available.
403

Architecting and Modeling Automotive Embedded Systems

Larses, Ola January 2005 (has links)
Dealing properly with electronics and software will be a strong competitive advantage in the automotive sector in the near future. Electronics are driving current innovations and are at the same time becoming a larger part of the cost of the vehicle. In order to be successful as an automotive manufacturer, innovations must be introduced in the vehicle without compromising the final price tag. The electronics also have to compete with, and win over, the dependability of well-known and proven mechanical solutions. Structure-related costs can be reduced by designing a modular system; volume-related costs can be reduced by utilizing fewer electronic control units that share software performing a variety of functions. To achieve a modular system, careful consideration must be applied in the architecture design process. Architecting is commonly referred to as an art, performed in a qualitative manner. This thesis provides a quantitative method for architecture design and evaluation targeting modular architectures. The architecture design method is based on a simple underlying information model. This model is extended through practical experiences in case studies to include support for configuration and documentation. An information model is a key enabler for managing the increasing complexity of automotive embedded systems. The model provides the basis for establishing the analyzable documentation that is required to ensure the dependability of the systems, specifically in terms of reliability, maintainability and safety. An information model supports traceability within the product, across components, and between the different organizational units that use different views of the product throughout the lifecycle. Further, some general issues of systems engineering and model-based development related to the engineering of automotive embedded systems are discussed. Considerations for introducing a model-based development process are covered.
Also, the maturity of development processes and the requirements on tools in an automotive context are evaluated. The ideas and methods presented in this thesis have been developed and tried in an industrial setting through a range of case studies. / QC 20101027
404

A new and improved control of a power electronic converter for stabilizing a variable speed generation system using an embedded microcontroller

Venkatswamy, Suresh 03 May 1991 (has links)
A new and improved stabilizer was developed for the variable speed generation (VSG) system. The VSG system exhibits periodic oscillations which sometimes lead to a loss of synchronism. After careful study, a simple but effective strategy to stabilize the system was implemented with real-time digital feedback control. The VSG system consists of an engine, which is the prime mover, driving a doubly fed machine (DFM), which is the generator. The stator of the DFM is directly connected to the grid while the rotor is connected to the grid through a power electronic converter. The converter used in this study is a series resonance converter (SRC), but the proposed method may also be applied to other kinds of converters. The stabilizer senses the engine RPM (the feedback signal) and controls the rotor current amplitude and frequency of the doubly fed machine. Control was implemented using the 80C196KB microcontroller. The software consists of a mix of "C" and assembly language. Speed being an important factor in the implementation, care was taken to minimize the control loop times. The important features of the hardware and software developed for the stabilizer are: (1) a 12 MHz controller board, (2) a real-time digital band-pass filter, (3) instantaneous rotor speed measurement, (4) interrupt-driven measurement and control loops, (5) user-defined setup parameters, and (6) IBM PC-based real-time serial communication. The performance of the VSG system was studied with and without the stabilizer. A significant improvement in the stability of the system was noticed over the entire region of operation. / Graduation date: 1991
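For illustration, a real-time digital band-pass filter of the kind listed above can be modeled as a second-order (biquad) section. This is a generic sketch using the common RBJ-cookbook coefficients with assumed parameter values, not the thesis's 80C196KB implementation.

```python
import math

def bandpass_filter(samples, f0, q, fs):
    """Second-order (biquad) band-pass filter, direct form I.
    f0: center frequency in Hz, q: quality factor, fs: sample rate in Hz."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b2 = alpha, -alpha                      # band-pass numerator (b1 = 0)
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    x1 = x2 = y1 = y2 = 0.0                     # filter state (delay elements)
    out = []
    for x in samples:
        y = (b0 * x + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x                          # shift the delay line
        y2, y1 = y1, y
        out.append(y)
    return out
```

Such a filter passes an oscillation near f0 (e.g. the periodic speed oscillation) at roughly unit gain while rejecting the DC component of the RPM signal.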
405

ECG compression for Holter monitoring

Ottley, Adam Carl 11 April 2007
Cardiologists can gain useful insight into a patient's condition when they are able to correlate the patient's symptoms and activities. For this purpose, a Holter monitor is often used - a portable electrocardiogram (ECG) recorder worn by the patient for a period of 24-72 hours. Preferably, the monitor is not cumbersome to the patient and thus it should be designed to be as small and light as possible; however, the storage requirements for such a long signal are very large and can significantly increase the recorder's size and cost, and so signal compression is often employed. At the same time, the decompressed signal must contain enough detail for the cardiologist to be able to identify irregularities. "Lossy" compressors may obscure such details, whereas a "lossless" compressor preserves the signal exactly as captured.

The purpose of this thesis is to develop a platform upon which a Holter monitor can be built, including a hardware-assisted lossless compression method in order to avoid the signal quality penalties of a lossy algorithm.

The objective is to develop and implement a low-complexity lossless ECG encoding algorithm capable of at least a 2:1 compression ratio in an embedded system for use in a Holter monitor.

Different lossless compression techniques were evaluated in terms of coding efficiency as well as suitability for the ECG waveform application, random access within the signal, and complexity of the decoding operation. To reduce the physical circuit size, a System on a Programmable Chip (SOPC) design was utilized.

A coder based on a library of linear predictors and Rice coding was chosen; it gave a compression ratio of at least 2:1, and as high as 3:1, on the real-world signals tested, while having a low decoder complexity and fast random access to arbitrary parts of the signal. In the hardware-assisted implementation, encoding was four to five times faster than a software encoder running on the same CPU, while still allowing the CPU to perform other tasks during the encoding process.
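The linear-prediction-plus-Rice-coding scheme described above can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual coder: the fixed second-order predictor, the zigzag mapping of signed residuals, and the parameter k = 2 are all assumptions.

```python
def rice_encode(u, k):
    """Rice code for a non-negative integer u: the quotient u >> k in
    unary, a '0' terminator, then the k-bit binary remainder."""
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def zigzag(n):
    """Fold signed residuals onto non-negative integers:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * n if n >= 0 else -2 * n - 1

def encode_ecg(samples, k=2):
    """Predict each sample as 2*x[i-1] - x[i-2] (one simple linear
    predictor) and Rice-code the zigzag-mapped residuals."""
    bits = []
    for i, x in enumerate(samples):
        pred = 0 if i < 2 else 2 * samples[i - 1] - samples[i - 2]
        bits.append(rice_encode(zigzag(x - pred), k))
    return "".join(bits)
```

Small residuals map to short codewords, which is where the compression comes from on smooth ECG segments; a practical coder would also choose k (and the predictor) adaptively.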
406

Model study of the hydraulics related to fish passage through embedded culverts

Garner, Megan 21 April 2011
Corrugated steel pipe (CSP) culverts are widely used as an economical alternative for conveying streams and small rivers through road embankments. While passage of the design flow is generally the primary goal for culvert design, consideration must also be given to maintaining connectivity within the aquatic environment for fish and other aquatic organisms. In Canada, the design criteria for fish passage through culverts are generally specified in terms of a maximum mean flow velocity corresponding to the weakest swimming fish expected to be found at a specific location. Studies have shown, however, that the velocity distribution within a CSP culvert may provide sufficient areas of lower velocity flow near the culvert boundary to allow for fish passage, even when the mean flow velocity exceeds a fish's swimming ability. Improved knowledge of the hydraulic conditions within CSP culverts, combined with research into fish swimming capabilities and preferences, may make it possible to better tailor culvert designs for fish passage while at the same time decreasing construction costs. To meet the requirements of regulators, various measures may be taken to reduce culvert flow velocities. Embedding, or setting the invert of a culvert below the normal stream bed elevation, is a simple and inexpensive method of increasing the flow area in a culvert flowing partially full, thereby decreasing flow velocity. Fish traversing an embedded culvert benefit not only from lower mean flow velocities, but also from even lower flow velocities in the near-boundary region. In the province of Saskatchewan, culvert embedment is regularly used as a means to improve fish passage conditions. In this study, a laboratory scale model was used to study the velocity distribution within a non-embedded and an embedded CSP culvert. An acoustic Doppler velocimeter was used to measure point velocities throughout the flow cross section at several longitudinal locations along the culvert.
The hydraulic conditions were varied by changing the discharge, culvert slope and depth of embedment. The point velocity data were analyzed to determine patterns of velocity and turbulence intensity at each cross section, as well as along the length of the culvert. The results from the embedded culvert tests were compared with the results from the equivalent non-embedded tests, so that initial conclusions could be made regarding the use of embedment to improve conditions for fish passage. Analysis of the cross section velocity distributions showed that even the non-embedded culvert had a significant portion of the flow area with flow velocity less than the mean velocity. The results from the embedded tests confirmed that embedding the culvert reduced the flow velocity throughout each cross section, although the effect was most significant for the cross sections located greater than one culvert diameter downstream from the inlet. This variation in the effectiveness of embedment at reducing flow velocities is attributed to the length of the M1 backwater profile relative to the culvert length, and thus the differential increase in flow depth that occurred at each measurement location along the culvert. For both the non-embedded and embedded culvert, the peak point magnitudes of turbulence intensity were found to be located near the culvert inlet where the flow was contracting. In terms of the cross section average turbulence intensity, in the non-embedded culvert turbulence increased with distance downstream from the inlet and was highest at the cross sections located near the culvert outlet. Embedding the culvert was found to either have no impact on, or to slightly increase, the cross section average turbulence intensity near the inlet, a result again attributed to the tapering out of the M1 backwater profile at locations near the inlet under the flow conditions tested.
However, beyond eight culvert diameters downstream from the inlet, embedment did result in lower cross section average turbulence intensity when compared to the non-embedded culvert. The measured velocity profiles for the non-embedded tests were found to compare well to the theoretical log-law velocity distribution using a ks value between 0.012 m and 0.022 m, or approximately one to two times the corrugation amplitude, when the datum for analysis was taken at the crest of the pipe corrugation. The cross section velocity distributions for the non-embedded tests compared very well to the model proposed by Ead et al. (2000). Based on this assessment, it appears that the Ead et al. model is potentially suitable for predicting the portion of the cross-sectional area in a non-embedded culvert with flow velocity less than the design target, for culvert fish passage design purposes. Overall, the results of the study confirm that embedding a CSP culvert may be an effective way to improve fish passage conditions in terms of both flow velocity and turbulence intensity.
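For reference, the rough-wall log-law against which the measured profiles were compared has the standard form below; the values of the von Kármán constant and the additive constant are the textbook ones for fully rough flow (assumed here), with z measured from the datum at the corrugation crest as described above:

```latex
\frac{u(z)}{u_*} = \frac{1}{\kappa}\,\ln\!\left(\frac{z}{k_s}\right) + B_r,
\qquad \kappa \approx 0.41,\quad B_r \approx 8.5
```

Here u* is the shear velocity and ks the equivalent sand roughness; fitting the measured profiles then amounts to selecting the ks (and datum) that best linearize u against ln(z).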
407

Embedded network firewall on FPGA

Ajami, Raouf 22 November 2010
The Internet has profoundly changed human life today. A variety of information and online services are offered by various companies and organizations via the Internet. Although these services have substantially improved the quality of life, they have at the same time brought new challenges and difficulties. Information security can easily be compromised by many threats from attackers with different purposes. A catastrophic event can happen when a computer or a computer network is exposed to the Internet without any security protection and an attacker compromises the computer or the network resources with destructive intent.

The security issues can be mitigated by setting up a firewall between the inside network and the outside world. A firewall is a software or hardware network device used to enforce the security policy on inbound and outbound network traffic, installed either on a single host or on a network gateway. A packet filtering firewall inspects the header fields of each network data packet based on its configuration and permits or denies the data passing through the network.

The objective of this thesis is to design a highly customizable hardware packet filtering firewall to be embedded on a network gateway. This firewall has the ability to process data packets based on: source and destination TCP/UDP port number, source and destination IP address range, source MAC address, and the combination of source IP address and destination port number. It is capable of accepting configuration changes in real time. An Altera FPGA platform has been used for implementing and evaluating the network firewall.
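The header-matching logic described above can be sketched in software as a first-match rule table. The rule fields mirror those listed in the abstract, but the names, rule format, and default-deny policy are illustrative assumptions; the actual design is realized in FPGA hardware, not software.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    """One filtering rule; a field left as None matches any value."""
    src_net: Optional[str] = None    # e.g. "10.0.0.0/8"
    dst_port: Optional[int] = None   # TCP/UDP destination port
    src_mac: Optional[str] = None    # source MAC address
    action: str = "deny"             # "permit" or "deny"

def filter_packet(rules, src_ip, dst_port, src_mac, default="deny"):
    """Return the action of the first rule whose non-None fields all
    match the packet header; fall back to the default policy."""
    for r in rules:
        if r.src_net is not None and ip_address(src_ip) not in ip_network(r.src_net):
            continue
        if r.dst_port is not None and r.dst_port != dst_port:
            continue
        if r.src_mac is not None and r.src_mac.lower() != src_mac.lower():
            continue
        return r.action
    return default
```

For example, `[Rule(src_net="10.0.0.0/8", dst_port=80, action="permit")]` permits HTTP traffic from the 10/8 range and denies everything else. A hardware implementation can evaluate all field comparators in parallel rather than looping over rules, which is one motivation for an FPGA firewall.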
408

Hydraulic characteristics of embedded circular culverts

Magura, Christopher Ryan 14 September 2007 (has links)
This report details a physical modeling study to investigate the flow characteristics of circular corrugated structural plate (CSP) culverts with 10% embedment and projecting end inlets, using a 0.62 m diameter corrugated metal pipe under a range of flows (0.064 m3/s to 0.254 m3/s) and slopes (0%, 0.5% and 1.0%). An automated sampling system was used to record detailed velocity measurements at cross-sections along the length of the model. The velocity data were then used to develop isovel plots, and observations were made regarding the effect of water depth, average velocity, boundary roughness and inlet configuration on the velocity structure. Other key aspects examined include the distribution of shear velocity and equivalent sand roughness, Manning’s roughness, an evaluation of composite roughness calculation methods, secondary currents, area-velocity relationships, the effect of embedment on maximum discharge and a simulation of model results using HEC-RAS. Recommendations are presented to focus future research. / October 2007
409

Scratchpad-oriented address generation for low-power embedded VLIW processors

Talavera Velilla, Guillermo 15 October 2009 (has links)
Nowadays, embedded systems are growing at an impressive rate and provide more and more sophisticated applications. An increasingly important set of embedded systems are real-time portable multimedia and digital signal processing communication systems: cellular phones, PDAs, digital cameras, handheld gaming consoles, multimedia terminals, netbooks, etc.
These systems require high-performance specific computations, usually with real-time and Quality of Service (QoS) constraints, which should run at a low energy level to extend battery life and avoid heating. A flexible system architecture is also required to successfully meet short time-to-market restrictions. Hence, embedded systems need a programmable, low-power and high-performance solution in order to deal with these requirements. Very Long Instruction Word architectures seem a good solution for providing enough computational performance at low power with the required programmability to speed the time-to-market. Those architectures rely on compiler effort to exploit the available instruction and data parallelism to keep the data path busy all the time. With the density of transistors doubling every 18 months, more and more complex architectures with a high number of computational resources running in parallel are emerging. With this increasing parallel computation, the access to data is becoming the main bottleneck that limits the available parallelism. To alleviate this problem, in current embedded architectures a special unit works in parallel with the main computing elements to ensure efficient feeding and storage of the data: the Address Generator Unit, which comes in many flavors. The purpose of this dissertation is to prove that optimizing the process of address generation is an effective way of solving the problem of accessing data while decreasing execution time and energy consumption. As a first step, this thesis evaluates the effectiveness of different state-of-the-art devices commonly used in the embedded domain, argues for the use of very long instruction word processors, and presents the compiler and architecture framework used for our experiments.
This thesis also presents a systematic classification of address generators, a review of the literature on address generation optimizations organized according to this classification, and a step-wise methodology that gradually reduces energy by reusing techniques that have already been published. The systematic architecture exploration framework and the methods used to obtain a reconfigurable address generation unit are also introduced. Results of the reconfigurable address generator unit are shown on several benchmarks and applications, and the complete step-wise methodology is demonstrated on a real-life example.
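The role of an address generator unit can be illustrated with a small software model of two common addressing patterns, linear post-increment and modulo (circular-buffer) addressing; the interface and parameters here are generic assumptions, not the dissertation's reconfigurable AGU.

```python
def agu_addresses(base, stride, count, modulo=None):
    """Yield 'count' addresses starting at 'base', stepping by 'stride';
    with 'modulo' set, offsets wrap inside a circular buffer of that size."""
    offset = 0
    for _ in range(count):
        yield base + offset
        offset = (offset + stride) % modulo if modulo else offset + stride
```

For example, `list(agu_addresses(0x100, 4, 4))` gives the linear pattern [0x100, 0x104, 0x108, 0x10C], while `modulo=8` wraps it to [0x100, 0x104, 0x100, 0x104]. Dedicated hardware produces one such address per cycle, freeing the datapath's functional units from this bookkeeping.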
410

Tools and theory to improve data analysis

Grolemund, Garrett 24 July 2013 (has links)
This thesis proposes a scientific model to explain the data analysis process. I argue that data analysis is primarily a procedure to build understanding and as such, it dovetails with the cognitive processes of the human mind. Data analysis tasks closely resemble the cognitive process known as sensemaking. I demonstrate how data analysis is a sensemaking task adapted to use quantitative data. This identification highlights a universal structure within data analysis activities and provides a foundation for a theory of data analysis. The model identifies two competing challenges within data analysis: the need to make sense of information that we cannot know and the need to make sense of information that we cannot attend to. Classical statistics provides solutions to the first challenge, but has little to say about the second. However, managing attention is the primary obstacle when analyzing big data. I introduce three tools for managing attention during data analysis. Each tool is built upon a different method for managing attention. ggsubplot creates embedded plots, which transform data into a format that can be easily processed by the human mind. lubridate helps users automate sensemaking outside of the mind by improving the way computers handle date-time data. Visual Inference Tools develop expertise in young statisticians that can later be used to efficiently direct attention. The insights of this thesis are especially helpful for consultants, applied statisticians, and teachers of data analysis.
