411 |
Utility Analysis And Computer Simulation Of RFID Technologies In The Supply Chain Applications Of Production Systems
Bolatli, Yurtseven 01 December 2009 (has links) (PDF)
In this thesis, the feasibility of deploying RFID technologies in the case of "low-volume
high-value" products is considered by focusing on the production processes
of a real company. First, the processes of the company are examined and associated
problems are determined. Accordingly, a simulation of the current situation is
constructed by using the discrete event simulation technique, in order to obtain an
accurate model. In addition to modeling the current situation, this simulation model
provides a flexible platform to analyze different scenarios and their effects on the
company's production. Next, various scenarios including RFID technology
deployment are examined, and their results are compared with respect to a profit
analysis that takes into consideration the changes in production, work-in-process
(WIP) inventory, stockouts, transportation and initial investment. Finally, the
analysis of the results and conclusions are given in order to provide guidance for
companies with "low-volume high-value" product portfolios.
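The discrete event simulation technique mentioned above can be sketched in miniature. The model below is a hypothetical single-machine production stage, not the company's actual model: jobs arrive, queue for the machine, and incur a per-job identification overhead of the kind an RFID scenario would reduce.

```python
import heapq

def simulate_line(arrivals, service_time, id_overhead):
    """Discrete event simulation of a single-machine production stage.

    arrivals     : sorted job arrival times
    service_time : processing time per job
    id_overhead  : manual identification time per job (an RFID
                   scenario would set this lower)

    Events are (time, kind, job) tuples processed in chronological
    order from a min-heap. Returns (makespan, mean waiting time).
    """
    events = [(t, "arrive", j) for j, t in enumerate(arrivals)]
    heapq.heapify(events)
    free_at = 0.0          # time the machine next becomes idle
    wait_total = 0.0
    makespan = 0.0
    while events:
        t, kind, job = heapq.heappop(events)
        if kind == "arrive":
            start = max(t, free_at)      # wait if the machine is busy
            wait_total += start - t
            free_at = start + service_time + id_overhead
            heapq.heappush(events, (free_at, "done", job))
        else:
            makespan = max(makespan, t)
    return makespan, wait_total / len(arrivals)
```

Running the same arrival stream with two different overhead values gives the kind of scenario comparison the abstract describes, here on WIP waiting time rather than profit.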
|
412 |
Federated Simulation Of Network Performance Using Packet Flow Modeling
Demirci, Turan 01 February 2010 (has links) (PDF)
The federated approach to the distributed simulation of a network is an alternative method that aims to combine existing simulation models and software using a Run Time Infrastructure (RTI), rather than building the whole simulation from scratch. In this study, an approach that significantly reduces the inter-federate communication load in federated simulation of communication networks is proposed. Rather than communicating packet-level information among federates, the characteristics of packet flows in individual federates are dynamically identified and communicated. Flow characterization is done with the Gaussian Mixtures Algorithm (GMA) using a Self Organizing Mixture Network (SOMN) technique. In simulations of a network partitioned into eight federates in a space-parallel manner, it is shown that significant speedups are achieved with the proposed approach without unduly compromising accuracy.
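As a rough illustration of characterizing a flow by a Gaussian mixture, the sketch below fits a two-component 1-D mixture to inter-arrival-time samples with plain EM. It is a generic stand-in: the thesis trains the mixture with a self-organizing mixture network (SOMN), which this sketch does not implement.

```python
import math

def fit_gmm_1d(xs, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture with plain EM.

    Means start spread over the data range so this toy example
    converges deterministically. Returns (weights, means, variances).
    """
    lo, hi = min(xs), max(xs)
    mu = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in xs:
            p = [w[j] / math.sqrt(2 * math.pi * var[j]) *
                 math.exp(-((x - mu[j]) ** 2) / (2 * var[j]))
                 for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weight, mean and variance per component
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-6)
    return w, mu, var
```

In a federated setting, each federate would exchange only the fitted (weight, mean, variance) triples instead of per-packet records, which is the source of the communication savings.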
|
413 |
Camera Motion Blur And Its Effect On Feature Detectors
Uzer, Ferit 01 September 2010 (has links) (PDF)
Perception, and hence the use of visual sensors, is indispensable in mobile and autonomous
robotics. Visual sensors such as cameras rigidly mounted on a robot frame are the most
common usage scenario. In this case, the motion of the camera due to the motion of the
moving platform, as well as the resulting shocks and vibrations, causes a number of
distortions in video frame sequences. The two most important are the frame-to-frame changes
of the line-of-sight (LOS) and the presence of motion blur in individual frames. The latter of
these, namely motion blur, plays a particularly dominant role in determining the performance of
many vision algorithms used in mobile robotics. It is caused by the relative motion between
the vision sensor and the scene during the exposure time of the frame. Motion blur is clearly
an undesirable phenomenon in computer vision, not only because it degrades the quality of
images but also because it causes feature extraction procedures to degrade or fail. Although there
are many studies on feature based tracking, navigation, object recognition algorithms in the
computer vision and robotics literature, there is no comprehensive work on the effects of
motion blur on different image features and their extraction.
In this thesis, a survey of existing models of motion blur and approaches to motion deblurring is presented. We review recent literature on motion blur and deblurring, focusing our
attention on motion-blur-induced degradation of a number of popular feature detectors. We
investigate and characterize this degradation using video sequences captured by the vision
system of a mobile legged robot platform. The Harris corner detector, the Canny edge detector
and the Scale Invariant Feature Transform (SIFT) are chosen as popular feature detectors that are
most commonly used for mobile robotics applications. The performance degradation of these
feature detectors due to motion blur is categorized to analyze the effect of legged locomotion
on feature performance for perception. These results are obtained as a first step
towards the stabilization and restoration of video sequences captured by our experimental
legged robotic platform and towards the development of a motion-blur-robust vision system.
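The degradation mechanism can be shown numerically. The sketch below is a toy example, not one of the thesis experiments: a step edge is convolved with a box point spread function modeling uniform camera motion during the exposure, and the peak gradient, the quantity a Canny-style detector thresholds, drops accordingly.

```python
import numpy as np

def motion_blur_1d(signal, length):
    """Convolve with a length-`length` box PSF: the 1-D model of
    uniform linear camera motion during the exposure time."""
    psf = np.ones(length) / length
    return np.convolve(signal, psf, mode="same")

# A step edge, the structure gradient-based detectors respond to.
edge = np.concatenate([np.zeros(20), np.ones(20)])
blurred = motion_blur_1d(edge, 7)

peak_sharp = np.abs(np.diff(edge)).max()    # 1.0 for the ideal step
peak_blur = np.abs(np.diff(blurred)).max()  # transition spread over ~7 samples
```

The blurred edge keeps its total intensity change but spreads it over the blur length, so a fixed gradient threshold that fires on the sharp edge can miss the blurred one.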
|
414 |
An FPGA Based High Performance Optical Flow Hardware Design For Autonomous Mobile Robotic Platforms
Gultekin, Gokhan Koray 01 September 2010 (links) (PDF)
Optical flow is used in a number of computer vision applications. However, its use in mobile robotic applications is limited because of the high computational complexity involved and the limited computational resources available on such platforms. The lack of hardware capable of computing an optical flow vector field in real time prevents the mobile robotics community from efficiently utilizing some successful techniques presented in the computer vision literature. In this thesis work, we design and implement a high-performance FPGA hardware with a small footprint and low power consumption that provides optical flow data at faster-than-real-time rates and is hence suitable for this application domain. The well-known differential optical flow algorithm of Horn and Schunck is selected for this implementation. The complete hardware design of the proposed system is described in detail. We also discuss the design alternatives and the selected approaches, together with a discussion of the selection procedure. We present a performance analysis of the proposed hardware in terms of computation speed, power consumption and accuracy. The designed hardware is tested with some of the test sequences frequently used for performance evaluation of optical flow techniques in the literature. The proposed hardware computes the optical flow vector field of 256x256 pixel images in 3.89 ms, which corresponds to a processing speed of 257 fps. The results obtained from the FPGA implementation are compared with a floating-point implementation of the same algorithm realized on PC hardware. The results show that the hardware implementation achieves superior performance in terms of speed, power consumption and compactness, with minimal loss of accuracy due to the fixed-point implementation.
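For reference, the Horn and Schunck algorithm admits a compact software sketch. This is a floating-point toy version, nothing like the fixed-point FPGA design: Jacobi-style iterations trade off the brightness-constancy constraint against an alpha-squared-weighted smoothness term.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, iters=100):
    """Horn and Schunck optical flow (floating-point toy version).

    Iteratively refines (u, v) so that Ix*u + Iy*v + It ~ 0 while the
    flow field stays close to its local average (smoothness term
    weighted by alpha**2).
    """
    Ix = np.gradient(I1, axis=1)          # spatial derivatives of frame 1
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                          # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(iters):
        ub, vb = avg(u), avg(v)
        num = Ix * ub + Iy * vb + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v

# Toy check: a horizontal intensity ramp shifted right by one pixel
# should produce flow u ~ 1, v ~ 0.
x = np.tile(np.arange(16.0), (16, 1))
u, v = horn_schunck(0.5 * x, 0.5 * (x - 1), alpha=0.5, iters=100)
```

The per-pixel update uses only a 4-neighbor average and a few multiply-adds, which is what makes the algorithm a natural fit for a pipelined FPGA datapath.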
|
415 |
Analysis And Classification Of Spelling Paradigm EEG Data And An Attempt For Optimization Of Channels Used
Yildirim, Asil 01 December 2010 (has links) (PDF)
Brain Computer Interfaces (BCIs) are systems developed to control devices by using only brain signals. In BCI systems, different mental activities performed by the user are associated with different actions on the device to be controlled. The Spelling Paradigm is a BCI application which aims to construct words by detecting letters using P300 signals recorded via electrodes attached to various points on the scalp. Reducing letter detection error rates and increasing the speed of letter detection are crucial for the Spelling Paradigm: in this way, disabled people can express their needs more easily using this application.
In this thesis, two different methods, Support Vector Machines (SVM) and AdaBoost, are used for classification in the analysis, with Classification and Regression Trees as the weak classifier of AdaBoost. Time-frequency domain characteristics of P300 evoked potentials are analyzed in addition to time domain characteristics, using the Wigner-Ville distribution to transform time domain signals into the time-frequency domain. It is observed that classification results are better in the time domain. Furthermore, the optimum subset of channels that models P300 signals with the minimum error rate is sought: a method that uses both SVM and AdaBoost is proposed to select channels, and 12 channels are selected in the time domain with this method. Finally, the effect of dimension reduction is analyzed using Principal Component Analysis (PCA) and AdaBoost.
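A minimal sketch of the two classifiers on synthetic P300-like data follows. The data generator, bump shape, and sampling rate are all assumptions for illustration, not the thesis dataset; AdaBoost's default decision stumps stand in for the CART weak learners.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def make_epochs(n, target):
    """Synthetic single-channel epochs (100 samples each); target
    epochs carry a P300-like positive deflection near sample 30."""
    t = np.arange(100.0)
    bump = 2.0 * np.exp(-(t - 30.0) ** 2 / 50.0) if target else 0.0
    return rng.normal(0.0, 1.0, (n, 100)) + bump

X = np.vstack([make_epochs(100, True), make_epochs(100, False)])
y = np.array([1] * 100 + [0] * 100)
idx = rng.permutation(200)                       # shuffled train/test split
Xtr, ytr = X[idx[:150]], y[idx[:150]]
Xte, yte = X[idx[150:]], y[idx[150:]]

acc_svm = SVC(kernel="linear").fit(Xtr, ytr).score(Xte, yte)
acc_ada = AdaBoostClassifier(n_estimators=50).fit(Xtr, ytr).score(Xte, yte)
```

Both classifiers operate directly on the time-domain samples here, mirroring the finding that time-domain features already separate target from non-target epochs well.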
|
416 |
Predicting The Effect Of Hydrophobicity Surface On Binding Affinity Of PCP-like Compounds Using Machine Learning Methods
Yoldas, Mine 01 April 2011 (has links) (PDF)
This study aims to predict the binding affinity of PCP-like compounds by means of molecular hydrophobicity, an important property which affects the binding affinity of molecules. The molecular hydrophobicity values are obtained in a three-dimensional coordinate system. Our aim is to reduce the number of points on the hydrophobicity surface of the molecules; this is modeled by using self-organizing maps (SOM) and k-means clustering. The feature sets obtained from SOM and k-means clustering
are then used to predict the binding affinity of molecules individually, with support vector regression and partial least squares regression used for prediction.
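The k-means reduction step can be sketched as follows. The toy point cloud stands in for a molecule's 3-D hydrophobicity surface points, and the centroids form the reduced feature set; the SOM alternative plays the same reduction role in the thesis. The farthest-point initialization is an assumption chosen to keep the example deterministic.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means on an (n, 3) point cloud.

    Returns (centers, labels): the k centroids summarize the cloud,
    e.g. reducing thousands of surface points to a fixed-size feature
    vector per molecule.
    """
    pts = points.astype(float)
    # deterministic farthest-point initialization
    centers = [pts[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers, labels
```

The flattened `centers` array (k x 3 values, optionally with a mean hydrophobicity per cluster) is the kind of fixed-length vector a regressor such as SVR can consume.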
|
417 |
A Comparative Evaluation Of Super
Erbay, Fulya 01 May 2011 (has links) (PDF)
In this thesis, obtaining high-definition color images by using super-resolution algorithms is proposed, and resolution enhancement of RGB, HSV and YIQ color domain images is presented. Three solution methods are presented to improve the resolution of HSV color domain images; they are suggested to suppress color artifacts in the super-resolution image and to decrease the computational complexity of HSV domain applications. PSNR values are measured and compared with the results of the other two color domain experiments. In RGB color space, super-resolution algorithms are applied to the three color channels (R, G, B) separately and PSNR values are measured. In the YIQ color domain, only the Y channel is processed with super-resolution algorithms, because the Y channel is the luminance component of the image and the most important channel for improving resolution in that domain. Likewise, the third solution method suggested for the HSV color domain applies the super-resolution algorithm only to the value channel, since the value channel carries the brightness data of the image; these results are compared with the YIQ color domain experiments. During the experiments, four different super-resolution algorithms are used: Direct Addition, MAP, POCS and IBP. Although these methods are widely used for the reconstruction of monochrome images, here they are used for resolution enhancement of color images, and their color super-resolution performances are tested.
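The "value channel only" strategy can be sketched with standard-library color conversions. The upscale step below is plain pixel replication, a placeholder for a real super-resolution reconstruction such as MAP, POCS or IBP; the point is only the pipeline shape, where H and S are carried through untouched while V alone is processed.

```python
import colorsys

def upscale2x(ch):
    """Placeholder for the super-resolution step: 2x pixel replication
    of a 2-D list of floats (a real system would reconstruct here)."""
    out = []
    for row in ch:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def hsv_sr(rgb):
    """Convert RGB -> HSV, enlarge only V with the 'super-resolution'
    step (H and S are merely replicated to match), convert back."""
    hsv = [[colorsys.rgb_to_hsv(*px) for px in row] for row in rgb]
    h = [[px[0] for px in row] for row in hsv]
    s = [[px[1] for px in row] for row in hsv]
    v = [[px[2] for px in row] for row in hsv]
    h2, s2, v2 = upscale2x(h), upscale2x(s), upscale2x(v)
    return [[colorsys.hsv_to_rgb(a, b, c) for a, b, c in zip(hr, sr, vr)]
            for hr, sr, vr in zip(h2, s2, v2)]
```

Processing one channel instead of three is the source of the complexity saving, and keeping hue fixed is what suppresses the color artifacts that per-channel RGB processing can introduce.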
|
418 |
Implementation Of A Low-cost Smart Camera Application On A COTS System
Baykent, Hayri Kerem 01 January 2012 (has links) (PDF)
The objective of this study is to implement a low-cost smart camera application on a
Commercial Off-The-Shelf (COTS) system based on Texas Instruments' DM3730
System-on-Chip processor. Although there are different architectures for smart
camera applications, an ARM-plus-DSP System-on-Chip architecture is selected
for the implementation because of the complementary abilities of its cores. The
Beagleboard-xM platform, which has such an ARM-plus-DSP System-on-Chip
processor, is chosen as the COTS platform. In this thesis, firstly, to bring up the
COTS platform, the design steps of porting an embedded Linux to the ARM core of
the System-on-Chip processor are described. Then the design steps necessary for
implementing smart camera applications on both the ARM and DSP cores in
parallel are given in detail. Furthermore, the real-time image processing
performance of the Beagleboard-xM platform for smart camera applications is
evaluated with simple implementations.
|
419 |
Web Service Testing For Domain Specific Web Service Discovery Framework
Utku, Selma 01 February 2012 (has links) (PDF)
The reliability of web services is important both for users and for the other service providers with which they interact. Thus, to guarantee the reliability of web services that are invoked and integrated at runtime, automatic testing of web services is needed.
In web service testing, different test cases for web services are generated. The most important issue is to generate the most appropriate values for the input parameters of web services at runtime. In this thesis, we developed a method for automatic web service testing that uses semantic-dependency-based and data-mutation-based techniques to analyze web services and generate different test cases. Thus, we both check whether the services function correctly, by generating appropriate input values from different data sources, and check the robustness of web services, by generating random and error-prone data inputs. With respect to the behavior of each web service, the test values are calculated and saved to the database.
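A minimal sketch of the data-mutation idea follows, with hypothetical mutation rules; the thesis's actual rule set and data sources are richer. Given one valid input value, it derives boundary and error-prone variants to send to the service under test.

```python
def mutate(value):
    """Derive error-prone test variants of a valid input value.

    The rules are illustrative: missing/empty inputs for any type,
    sign and boundary flips for numbers, oversized and
    injection-style payloads for strings.
    """
    variants = [None, ""]                        # missing / empty input
    if isinstance(value, bool):
        variants += [not value]
    elif isinstance(value, (int, float)):
        variants += [-value, 0, value + 1, 2 ** 31 - 1]   # boundaries
    elif isinstance(value, str):
        variants += [value.upper(), value * 1000,          # oversized
                     value + "'; --"]                      # injection-style
    return variants
```

Correctness tests would use the values drawn from valid data sources, while each mutated variant becomes a robustness test case whose response (error code, timeout, crash) is recorded per service.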
|
420 |
Real-time Stereo To Multi-view Video Conversion
Cigla, Cevahir 01 July 2012 (has links) (PDF)
A novel and efficient methodology is presented for the conversion of stereo to multi-view video in order to address the 3D content requirements of next generation 3D-TVs and auto-stereoscopic multi-view displays. There are two main algorithmic blocks in such a conversion system: stereo matching and virtual view rendering, which enable extraction of 3D information from stereo video and synthesis of nonexistent virtual views, respectively. In the intermediate steps of these functional blocks, a novel edge-preserving filter is proposed that recursively constructs connected support regions for each pixel among color-wise similar neighboring pixels. The proposed recursive update structure eliminates the pre-defined window dependency of conventional approaches, providing complete content adaptability with quite low computational complexity. Based on extensive tests, it is observed that the proposed filtering technique yields better or competitive results against some leading techniques in the literature. The proposed filter is mainly applied in stereo matching to aggregate cost functions, and it also handles occlusions, enabling high quality disparity maps for the stereo pairs. Similar to the box filter paradigm, this novel technique matches arbitrary-shaped regions in constant time. Based on Middlebury benchmarking, the proposed technique is currently the best local matching technique in the literature in terms of both precision and complexity. Next, virtual view synthesis is conducted through depth image based rendering, in which the reference color views of the left and right pairs are warped to the desired virtual view using the estimated disparity maps. A feedback mechanism based on disparity error is introduced at this step to remove salient distortions for the sake of visual quality. Furthermore, the proposed edge-aware filter is re-utilized to assign proper texture to holes and occluded regions during view synthesis.
The efficiency of the proposed scheme is validated by a real-time implementation on a special graphics card that enables parallel computing. Based on extensive experiments on stereo matching and virtual view rendering, the proposed method yields fast execution, low memory requirements and high quality outputs, with superior performance compared to most state-of-the-art techniques.
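The recursive edge-aware aggregation can be sketched in one dimension. This is a simplified rendition with an assumed exponential similarity weight, not the exact formulation in the thesis: a per-boundary "permeability" derived from intensity similarity gates how much aggregated cost flows between neighboring pixels, so support regions stop at edges.

```python
import numpy as np

def edge_aware_aggregate(cost, intensity, sigma=10.0):
    """Two-pass recursive aggregation of a 1-D matching-cost row.

    perm[i] gates transfer between pixels i and i+1: near 1 for
    similar intensities, near 0 across an edge. A left-to-right and a
    right-to-left pass together give each pixel an arbitrarily shaped
    support region in O(n), independent of region size.
    """
    cost = np.asarray(cost, dtype=float)
    inten = np.asarray(intensity, dtype=float)
    perm = np.exp(-np.abs(np.diff(inten)) / sigma)
    left = cost.copy()
    for i in range(1, len(cost)):            # accumulate from the left
        left[i] += perm[i - 1] * left[i - 1]
    right = cost.copy()
    for i in range(len(cost) - 2, -1, -1):   # accumulate from the right
        right[i] += perm[i] * right[i + 1]
    return left + right - cost               # center cost counted once
```

The constant-time-per-pixel recursion is what makes the aggregation competitive with box filtering while still adapting its support region to image content.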
|