121 |
Embedded early vision techniques for efficient background modeling and midground detection. Valentine, Brian Evans (26 March 2010)
An automated vision system performs critical tasks in video surveillance while decreasing costs and increasing efficiency. It can provide high-quality scene monitoring without the limitations of human distraction and fatigue. Advances in embedded processors, wireless networks, and imager technology have enabled computer vision systems to be deployed pervasively in stationary surveillance monitors, hand-held devices, and vehicular sensors. However, the size, weight, power, and cost requirements of these platforms present a great challenge in developing real-time systems. This dissertation explores the development of background modeling algorithms for surveillance on embedded platforms. Our contributions are as follows:
- An efficient pixel-based adaptive background model, called multimodal mean, which produces results comparable to the widely used mixture of Gaussians multimodal approach at a much reduced computational cost and with greater control of occluded object persistence.
- A novel and efficient chromatic clustering-based background model for embedded vision platforms that leverages the color uniformity of large, permanent background objects to yield significant speedups in execution time.
- A multi-scale temporal model for midground analysis, which provides a means to "tune in" to changes in the scene beyond the standard background/foreground framework, based on user-defined temporal constraints.
Multimodal mean reduces instruction complexity with the use of fixed integer arithmetic and periodic long-term adaptation that occurs once every d frames. When combined with fixed thresholding, it performs 6.2 times faster than the mixture of Gaussians method while using 18% less storage. Furthermore, fixed thresholding compares favorably to standard deviation thresholding with a percentage difference in error less than five percent when used on scenes with stable lighting conditions and modest multimodal activity.
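The integer-only per-pixel update that keeps this model cheap can be sketched as follows. The mode count, fixed threshold, and fixed-point shift below are illustrative assumptions for a single grayscale pixel, not the dissertation's exact parameters.

```python
# Sketch of a multimodal-mean style background test for one grayscale pixel.
# All constants are assumed values, chosen only to illustrate the mechanism.

K = 3          # number of background modes kept per pixel (assumed)
T = 20         # fixed matching threshold (assumed)
SHIFT = 4      # fixed-point fraction bits used instead of floating point

def update_pixel(modes, value):
    """Classify one pixel sample and adapt the matching mode in place.

    Returns True if the sample matches a background mode, False otherwise.
    """
    for mode in modes:
        mean = mode["sum"] >> SHIFT
        if abs(value - mean) <= T:
            # Integer-only running-mean update: sum drifts toward the sample.
            mode["sum"] += value - mean
            mode["count"] += 1
            return True
    if len(modes) < K:
        # Unmatched sample seeds a new candidate mode.
        modes.append({"sum": value << SHIFT, "count": 1})
    return False

modes = []
for v in [100, 102, 99, 101]:     # stable background samples
    update_pixel(modes, v)
print(update_pixel(modes, 101))   # matches the learned mode -> True
print(update_pixel(modes, 200))   # unseen value, foreground -> False
```

A periodic long-term adaptation pass (run once every d frames, as the abstract describes) would then prune low-count modes; that step is omitted here for brevity.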
The chromatic clustering-based approach to optimized background modeling takes advantage of the color distributions of large, permanent background objects, such as a road, building, or sidewalk, to speed up execution. It abstracts their colors to a small color palette and suppresses their adaptation during processing. When run on a representative embedded platform, it reduces storage usage by 58% and improves runtime execution speed by 45%.
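The palette idea can be illustrated with a toy classifier: colors of large static objects are quantized to a few palette entries, and pixels near a palette color are labeled background without invoking the adaptive model. The palette entries and tolerance below are hypothetical.

```python
# Hypothetical palette test in the spirit of the chromatic clustering model.
# Palette colors (assumed asphalt / concrete tones) and tolerance are made up.

PALETTE = [(90, 90, 95), (170, 170, 165)]
TOL = 25  # per-channel match tolerance (assumed)

def matches_palette(rgb):
    """True if the pixel is within TOL of any palette color on every channel."""
    return any(all(abs(c - p) <= TOL for c, p in zip(rgb, entry))
               for entry in PALETTE)

print(matches_palette((95, 88, 100)))   # near the asphalt tone -> True
print(matches_palette((30, 200, 40)))   # saturated green -> False
```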
Multiscale temporal modeling for midground analysis presents a unified approach for scene analysis that can be applied to several application domains. It extends scene analysis from the standard background/foreground framework to one that includes a temporal midground object saliency window that is defined by the user. When applied to stationary object detection, the midground model provides accurate results at low sampling frame rates (~ 1 fps) while using only 18 Mbytes of storage and 15 Mops/sec processing throughput.
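The user-defined saliency window amounts to a three-way split on how long a region has been stationary. The thresholds below are illustrative (framed for the ~1 fps sampling rate the abstract mentions), not the model's actual parameters.

```python
# Sketch of a user-defined midground window: an object stationary for between
# T_MIN and T_MAX frames is "midground" (e.g. an abandoned bag), shorter stays
# are foreground, and longer ones are absorbed into the background.
# Thresholds are assumed values for illustration.

T_MIN, T_MAX = 30, 600   # at ~1 fps: roughly 30 s to 10 min (assumed)

def classify(static_frames):
    if static_frames < T_MIN:
        return "foreground"
    if static_frames <= T_MAX:
        return "midground"
    return "background"

print(classify(5))     # just appeared
print(classify(120))   # stationary about two minutes
print(classify(5000))  # long-term scene change
```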
|
122 |
Dataflow-processing element for a cognitive sensor platform. McDermott, Mark William (26 June 2014)
Cognitive sensor platforms are the next step in the evolution of intelligent sensor platforms. These platforms have the capability to reason about both their external environment and internal conditions and to modify their processing behavior and configuration in a continuing effort to optimize their operational life and functional utility. The addition of cognitive capabilities is necessary for unattended sensor systems as it is generally not feasible to routinely replace the battery or the sensor(s). This platform provides a chassis that can be used to compose embedded sensor systems from composable elements. The composable elements adhere to a synchronous data flow (SDF) protocol to communicate between the elements using channels. The SDF protocol provides the capability to easily compose heterogeneous systems of multiple processing elements, sensor elements, debug elements and communications elements. The processing engine for this platform is a Dataflow-Processing Element (DPE) that receives, processes and dispatches SDF data tokens. The DPE is specifically designed to support the processing of SDF tokens using microcoded actors where programs are assembled by instantiating actors in a graphical modeling tool and verifying that the SDF protocol is adhered to.
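The SDF firing rule the platform's elements follow can be sketched in a few lines: an actor fires only when every input channel holds its declared number of tokens, then consumes and produces fixed token counts. The token rates and the summing actor below are illustrative, not the DPE's microcoded actors.

```python
# Minimal sketch of synchronous data flow (SDF) firing over channels.
# Rates and the example actor are assumptions for illustration only.

from collections import deque

class Channel:
    def __init__(self):
        self.tokens = deque()

class Actor:
    def __init__(self, inputs, outputs, consume, produce, fn):
        self.inputs, self.outputs = inputs, outputs
        self.consume, self.produce = consume, produce  # fixed token rates
        self.fn = fn

    def can_fire(self):
        # SDF rule: every input channel must hold its declared token count.
        return all(len(ch.tokens) >= n
                   for ch, n in zip(self.inputs, self.consume))

    def fire(self):
        args = [[ch.tokens.popleft() for _ in range(n)]
                for ch, n in zip(self.inputs, self.consume)]
        results = self.fn(*args)          # one token list per output channel
        for ch, toks in zip(self.outputs, results):
            ch.tokens.extend(toks)

src, dst = Channel(), Channel()
adder = Actor([src], [dst], consume=[2], produce=[1],
              fn=lambda pair: [[pair[0] + pair[1]]])  # 2 tokens in, 1 out

src.tokens.extend([1, 2, 3])
if adder.can_fire():
    adder.fire()
print(list(dst.tokens))   # the firing consumed tokens 1 and 2
```

Because rates are fixed and declared up front, a schedule for a whole graph of such actors can be verified statically, which is what makes composing heterogeneous elements tractable.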
|
123 |
Generating RTL for microprocessors from architectural and microarchitectural description. Bansal, Ankit Sajjan Kumar (17 June 2011)
Designing a modern processor is a very complex task. Writing the entire design using a hardware description language (like Verilog) is time consuming and difficult to verify. There exists a split architecture/microarchitecture description technique in which the description of any hardware can be divided into two orthogonal descriptions: (a) an architectural contract between the user and the implementation, and (b) a microarchitecture which describes the implementation of the architecture. The main aim of this thesis is to build realistic processors using this technique. We have designed an in-order and an out-of-order superscalar processor using the split-description compiler. The backend of this compiler is another contribution of this thesis.
|
124 |
Cloud Computing for Digital Libraries. Poulo, Lebeko Bearnard (01 May 2013)
Information management systems (digital libraries/repositories, learning management systems, content management systems) provide key technologies for the storage, preservation and dissemination of knowledge in its various forms, such as research documents, theses and dissertations, cultural heritage documents and audio files. These systems can make use of cloud computing to achieve high levels of scalability, while making services accessible to all at reasonable infrastructure costs and on-demand.
This research aims to develop techniques for building scalable digital information management systems based on efficient and on-demand use of generic grid-based technologies such as cloud computing. In particular, this study explores the use of existing cloud computing resources offered by some popular cloud computing vendors such as Amazon Web Services. This involves making use of Amazon Simple Storage Service (Amazon S3) to store large and increasing volumes of data, Amazon Elastic Compute Cloud (Amazon EC2) to provide the required computational power and Amazon SimpleDB for querying and data indexing on Amazon S3.
A proof-of-concept application comprising typical digital library services was developed and deployed in the cloud environment and evaluated for scalability when the demand for more data and services increases. The results from the evaluation show that it is possible to adopt cloud computing for digital libraries in addressing issues of massive data handling and dealing with large numbers of concurrent requests. Existing digital library systems could be migrated and deployed into the cloud.
|
125 |
Nonlinear Finite Element Analysis and Post-processing of Reinforced Concrete Structures under Transient Creep Strain. Jodai, Akira (28 November 2013)
A suite of NLFEA programs, VecTor, has been developed at the University of Toronto. However, this software still requires the development of other functions to execute some types of analyses. One of the required functions is the consideration of transient creep strain in the heat transfer analysis. Moreover, there is a strong need to develop a general graphics-based post-processor applicable to VecTor programs.
The first objective of this thesis is to develop a function considering the effect of transient creep strain, because it can have a significant influence on the behaviour of concrete under elevated temperatures. The second purpose of this thesis is to construct new analysis visualization features compatible with the entire suite of VecTor programs. As a result, the modified post-processor, JANUS, has had its abilities expanded significantly.
|
127 |
Driver Circuit for an Ultrasonic Motor. Ocklind, Henrik (January 2013)
To make a camera more user-friendly, or to let it operate without a user, the camera objective needs to be able to bring the camera lens into focus. This functionality requires a motor of some sort; due to its many benefits, the ultrasonic motor is a preferred choice. The motor requires a driving circuit to produce the appropriate signals, and that circuit is the subject of this thesis. The main difficulty that needs to be considered is the fact that the ultrasonic motor is highly non-linear. This paper gives a brief walkthrough of how the ultrasonic motor works, its pros and cons, and how to control it, followed by how the driving circuit is designed and what roles the various components fill. The regulator is implemented in C code and runs on a microprocessor, while the actual signal generation is done on a CPLD. The report ends with a few suggestions for how to improve the system should the presented solution not perform at a satisfactory level.
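The thesis implements its regulator in C on a microprocessor; the sketch below shows the same discrete PI control structure in Python for readability. The gains, setpoint, and the toy first-order motor model (speed lagging the commanded drive frequency) are all assumptions chosen only to make the loop behaviour visible, not the thesis's actual regulator.

```python
# Illustrative discrete PI speed loop for a motor driven by a frequency
# command. Gains, setpoint, and the plant model are assumed values.

KP, KI = 0.4, 0.1     # assumed proportional and integral gains
SETPOINT = 100.0      # target speed (arbitrary units)

def run_loop(steps):
    speed, integral = 0.0, 0.0
    for _ in range(steps):
        error = SETPOINT - speed
        integral += error
        freq = KP * error + KI * integral   # drive-frequency command
        speed += 0.2 * (freq - speed)       # crude first-order motor response
    return speed

print(round(run_loop(200), 1))   # settles at the setpoint, 100.0
```

A real regulator for an ultrasonic motor would also have to handle the non-linearity the abstract highlights, for example by gain scheduling or by limiting the frequency command to the motor's usable band; none of that is modeled here.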
|
128 |
Functional Enhancement and Applications Development for a Hybrid, Heterogeneous Single-Chip Multiprocessor Architecture. Hegde, Sridhar (01 January 2004)
Reconfigurable and dynamic computer architecture is an exciting area of research that is rapidly expanding to meet the requirements of compute-intensive real-time and non-real-time applications in key areas such as cryptography and signal/radar processing. To meet the demands of such applications, a parallel single-chip heterogeneous Hybrid Data/Command Architecture (HDCA) has been proposed. This single-chip multiprocessor architecture system is reconfigurable at three levels: application, node and processor. It is currently being developed and experimentally verified via a three-phase prototyping process. A first-phase prototype with very limited functionality has been developed. This initial prototype was used as a base for further enhancements to functionality and performance, resulting in a second-phase virtual prototype, which is the subject of this thesis. The major contributions of the work reported here lie in further enhancing the functionality of the system: adding additional processors, making the system reconfigurable at the node level, enhancing the ability of the system to fork to more than two processes, and designing more complex real-time and non-real-time applications which exercise, and can be used to test and evaluate, the enhanced and new functionality added to the architecture. A working proof of concept of the architecture is achieved by Hardware Description Language (HDL) based development and use of a virtual prototype of the architecture. The virtual prototype was used to evaluate the architecture's functionality and performance in executing several newly developed example applications. Recommendations are made to further improve the system functionality.
|
129 |
A SoHo Router Implementation on the Motorola MCF5272 Processor and uClinux Operating System. Kacar, Mehmet Nazir (01 January 2003)
Recently, various special purpose processors have been developed and are frequently being used for different specialized tasks. Prominent among these are the communication processors, which are generally used within an embedded system environment. Such processors can run relatively advanced and general purpose operating systems such as uClinux, which is a freely available embedded Linux distribution. In this work, a prototype SoHo (Small office / Home office) router is designed and implemented using the Motorola MCF5272 as the core communication processor and uClinux as the operating system. The implementation relies purely on the existing hardware resources of an available development board and the publicly available open source utilities of uClinux. The overall development process provides an embedded system implementation and configuration example.
|
130 |
Improving processor efficiency by exploiting common-case behaviors of memory instructions. Subramaniam, Samantika (02 January 2009)
Processor efficiency can be described with the help of a number of desirable metrics, for example performance, power, area, design complexity and access latency. These metrics serve as valuable tools in designing new processors, and they also act as effective standards for comparing current processors. Various factors impact the efficiency of modern out-of-order processors, and one important factor is the manner in which instructions are processed through the processor pipeline. In this dissertation research, we study the impact of load and store instructions (collectively known as memory instructions) on processor efficiency, and show how to improve efficiency by exploiting common-case or predictable patterns in their behavior. The behavior patterns we focus on are the predictability of memory dependences, predictability in data forwarding patterns, predictability in instruction criticality, and conservativeness in resource allocation and deallocation policies.

We first design a scalable and high-performance memory dependence predictor, and then apply accurate memory dependence prediction to improve the efficiency of the fetch engine of a simultaneous multi-threaded processor. We then use predictable data forwarding patterns to eliminate power-hungry hardware in the processor with no loss in performance. Next, we study the behavior of critical load instructions and propose applications that can be optimized using predictable load-criticality information. Finally, we explore conventional techniques for allocation and deallocation of critical structures that process memory instructions and propose new techniques to optimize them. Our new designs have the potential to significantly reduce the power and area required by processors without losing performance, leading to efficient processor designs.
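The core of memory dependence prediction can be illustrated with a small table keyed by load PC, in the spirit of store-set style predictors: a load once observed to violate memory ordering against an older store is made to wait for that store on later encounters. The table layout and the PCs below are hypothetical, not the dissertation's predictor design.

```python
# Hedged sketch of a memory dependence predictor keyed by load PC.
# The single-entry-per-load table is an illustrative simplification.

class DependencePredictor:
    def __init__(self):
        self.depends_on = {}   # load PC -> store PC it must wait for

    def train(self, load_pc, store_pc):
        # Called when this load was wrongly issued before this older store.
        self.depends_on[load_pc] = store_pc

    def must_wait(self, load_pc, in_flight_store_pcs):
        # Hold the load back only while its predicted producer store
        # is still in flight; otherwise let it issue speculatively.
        store = self.depends_on.get(load_pc)
        return store is not None and store in in_flight_store_pcs

pred = DependencePredictor()
pred.train(load_pc=0x40A, store_pc=0x3F0)        # one violation observed
print(pred.must_wait(0x40A, {0x3F0, 0x3F8}))     # True: store still in flight
print(pred.must_wait(0x40A, set()))              # False: store has retired
print(pred.must_wait(0x500, {0x3F0}))            # False: untrained load
```

The common-case insight is visible even in this toy: most loads never train an entry and issue unhindered, while the few repeat offenders are delayed only as long as their producing store is outstanding.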
|