11

Angle Perception On Autostereoscopic Displays

Karaman, Ersin 01 July 2009 (has links) (PDF)
Stereoscopic displays provide 3D vision usually with the help of additional equipment such as shutter glasses and head gear. As a newer stereoscopic display technology, autostereoscopic 3D displays provide 3D vision without additional equipment. Previous studies of depth and distance estimation with autostereoscopic displays indicate that users do not exhibit better performance in 3D, yet they report that 3D displays provide higher immersiveness. In this study, perception of the angle of a 3D shape is investigated by comparing 2D, 3D and real perception cases. An experiment was conducted using an autostereoscopic 3D display. Forty people participated in the experiment; they were asked to estimate the vertex angle and draw the projections of the object from two different viewpoints. It is found that users can better estimate the angles on a cone when it is viewed from the top on an autostereoscopic display, which may contribute positively to 3D understanding of the scene. Results revealed that participants make more accurate angle estimations on autostereoscopic 3D displays than on traditional 2D displays. In general, the participants' angle drawings were slightly higher than their angle estimations. Moreover, the participants overestimated the 35, 65 and 90 degree angles and underestimated the 115 degree angle on the autostereoscopic 3D display.
12

Parallelization Of Functional Flow To Predict Protein Functions

Akkoyun, Emrah 01 February 2011 (has links) (PDF)
Protein-protein interaction networks provide important information about the possible biological functions of proteins whose roles in the cell are unknown. These interaction networks have been analyzed with a variety of approaches running on a single computer, and the roles of identified proteins have been used to predict the functions of unidentified ones. Functional flow is an approach that takes network connectivity, distance effects, and both local and global views of the network topology into account. Previous research has shown that, with these advantages, functional flow produces more accurate predictions of protein function. However, the existing implementation of this approach cannot be applied in practice to the large and complex networks of complex species because of memory limitations. The purpose of this thesis is to provide a new implementation on a high-performance computing platform that scales to large data sets. Therefore, Hadoop, one of the open-source map/reduce environments, was installed on 18 hosts, each of which has eight cores. The first map/reduce job distributes the protein interaction network to all worker nodes in a format that allows parallel distributed computing; the second map/reduce job generates flows for each known protein function, and the roles of unidentified proteins are predicted by accumulating all of these generated flows. Our experiments show that this computation, which requires high-performance computing, can be decomposed efficiently across worker nodes and that performance improves as resources increase.
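To make the propagation step concrete, the sketch below gives a minimal, single-machine illustration of the general functional-flow idea: annotated proteins act as sources whose "function reservoirs" flow along weighted interaction edges, and unannotated proteins accumulate flow. The graph format, the simplified reservoir rule, and the iteration count are assumptions for illustration; the thesis's actual Hadoop map/reduce jobs and data formats are not reproduced here.

```python
# Minimal single-machine sketch of functional flow (illustrative assumptions only).
from collections import defaultdict

def functional_flow(edges, annotations, iterations=3):
    """edges: {(u, v): weight}, undirected, each pair listed once.
    annotations: {protein: set of known functions}.
    Returns accumulated inflow per (protein, function); for an unannotated
    protein, the highest-scoring function is its predicted function."""
    adj = defaultdict(dict)
    for (u, v), w in edges.items():
        adj[u][v] = w
        adj[v][u] = w
    functions = {f for fs in annotations.values() for f in fs}
    INF = float("inf")
    # Reservoirs: unlimited supply at proteins already annotated with a function.
    res = {p: {f: (INF if f in annotations.get(p, set()) else 0.0)
               for f in functions} for p in adj}
    received = defaultdict(float)

    for _ in range(iterations):
        inflow = defaultdict(float)
        for u in adj:
            wsum = sum(adj[u].values())
            for f in functions:
                if res[u][f] == 0.0:
                    continue
                for v, w in adj[u].items():
                    if res[u][f] <= res[v][f]:
                        continue  # flow only from fuller to emptier reservoirs
                    # An edge carries at most its weight per step; a finite
                    # reservoir is split proportionally to edge weights.
                    push = w if res[u][f] == INF else min(w, res[u][f] * w / wsum)
                    inflow[(v, f)] += push
        for (v, f), amount in inflow.items():
            if res[v][f] != INF:
                res[v][f] += amount
            received[(v, f)] += amount
    return received

# Tiny usage example: protein "c" inherits the function of its annotated neighbours.
flows = functional_flow({("a", "c"): 1.0, ("b", "c"): 2.0},
                        {"a": {"kinase"}, "b": {"kinase"}})
print(max((f for (p, f) in flows if p == "c"), key=lambda f: flows[("c", f)]))
```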
13

Reconstructing Signaling Pathways From Rnai Data Using Genetic Algorithms

Ayaz, Eyup Serdar 01 September 2011 (has links) (PDF)
Cell signaling is a set of chemical reactions used for intercellular and intracellular communication, and signaling pathways represent these chemical reactions in a systematic manner. Today, many signaling pathways have been constructed by several experimental methods, yet many communication mechanisms of cells remain to be discovered. The RNAi system allows us to observe the phenotypes that result when some genes are removed from living cells, and by observing these phenotypes we can build signaling pathways. However, doing so is costly in terms of time and space complexity. Furthermore, there are some interactions that RNAi data cannot distinguish, which results in many different signaling pathways, all of which are consistent with the RNAi data. In this thesis, we combine genetic algorithms with some greedy approaches to find the topologies that fit Boolean single knock-down RNAi experiments. Our algorithm finds nearly all of the results for small inputs in a few minutes, and it can also find a significant number of results for larger inputs. We then eliminate isomorphic topologies from the output set of this algorithm, which considerably reduces the number of topologies. Afterwards we offer a simple scheme for suggesting new wet-lab RNAi experiments, which is necessary for a complete approach to finding the actual network. We also describe a new activation and deactivation model for pathways in which the activation of the phenotype after RNAi knock-downs is given as weighted variables, and we adapt the first genetic algorithm approach to this model to directly find the most probable network.
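As a rough illustration of the genetic-algorithm component, the sketch below evolves candidate pathway topologies (encoded as edge bit-vectors) and scores them by how many Boolean single-knock-down outcomes they reproduce, using reachability from a source node to a reporter node as the predicted phenotype. The encoding, the reachability-based fitness, and the GA parameters are illustrative assumptions; they do not reproduce the thesis's actual operators, greedy refinements, or activation/deactivation model.

```python
# Illustrative GA over pathway topologies scored against knock-down phenotypes.
import random

def all_pairs(n):
    """All possible directed edges among n nodes."""
    return [(u, v) for u in range(n) for v in range(n) if u != v]

def predict_phenotype(edges, n, knocked_out, source=0, reporter=None):
    """Boolean phenotype: the reporter node is 'active' iff it is still
    reachable from the source after removing the knocked-out node."""
    reporter = n - 1 if reporter is None else reporter
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
    seen, stack = set(), ([source] if source != knocked_out else [])
    while stack:
        u = stack.pop()
        if u == knocked_out or u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    return reporter in seen

def fitness(bits, n, experiments):
    """Count the knock-down experiments whose observed phenotype the topology reproduces."""
    edges = [pair for bit, pair in zip(bits, all_pairs(n)) if bit]
    return sum(predict_phenotype(edges, n, ko) == obs for ko, obs in experiments)

def evolve(n, experiments, pop_size=60, generations=200, mutation=0.02):
    genes = len(all_pairs(n))
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda b: -fitness(b, n, experiments))
        survivors = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genes)          # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g ^ (random.random() < mutation) for g in child])
        pop = survivors + children
    return max(pop, key=lambda b: fitness(b, n, experiments))

# experiments: (knocked-out node, observed Boolean phenotype) pairs
best_topology = evolve(n=4, experiments=[(1, False), (2, True)])
```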
14

Two Versions Of The Stream Cipher Snow

Yilmaz, Erdem 01 December 2004 (has links) (PDF)
Two versions of SNOW, a word-oriented stream cipher proposed by P. Ekdahl and T. Johansson in 2000 and 2002, are studied together with cryptanalytic attacks on the first version. The reported attacks on SNOW1.0 are the "guess-and-determine" attacks by Hawkes and Rose and the "distinguishing" attack by Coppersmith, Halevi and Jutla in 2002. A review of the distinguishing attack on SNOW1.0 is given using the approach applied by the designers of SNOW in 2002 to another cipher, SOBER-t32. However, since the methods for calculating the complexity of the attack differ, the values found with the designers' method are higher than those found by Coppersmith, Halevi and Jutla. The correlations in the finite state machine that make the distinguishing attack possible, and how these correlations are affected by the operations in the finite state machine, are investigated. Since the substitution boxes (S-boxes) play an important role in destroying the correlation and linearity caused by the Linear Feedback Shift Register, the S-boxes of the two versions of SNOW are examined with respect to their Linear Approximation Table (LAT), Difference Distribution Table (DDT) and Auto-correlation Table distributions. Randomness tests are performed on both ciphers using the NIST statistical test suite, and the results of these tests are presented.
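The S-box criteria mentioned here are standard and straightforward to compute; the sketch below builds the Difference Distribution Table and the Linear Approximation Table for an arbitrary 4-bit permutation. The toy S-box is purely illustrative and is not one of the SNOW S-boxes, which are larger, byte- and word-oriented mappings.

```python
# DDT and LAT of a toy 4-bit S-box (the S-box values here are arbitrary).
def ddt(sbox):
    """Difference Distribution Table: entry [dx][dy] counts inputs x with
    S(x) xor S(x xor dx) == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

def lat(sbox):
    """Linear Approximation Table: entry [a][b] is the bias (count - n/2) of
    the approximation a.x xor b.S(x) == 0 over all inputs x."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            count = sum((bin(a & x).count("1") + bin(b & sbox[x]).count("1")) % 2 == 0
                        for x in range(n))
            table[a][b] = count - n // 2
    return table

# An arbitrary 4-bit permutation, only to demonstrate the tables.
toy_sbox = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
print(max(max(row) for row in ddt(toy_sbox)[1:]))                       # highest differential count
print(max(max(abs(v) for v in row[1:]) for row in lat(toy_sbox)[1:]))   # highest linear bias
```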
15

Template Based Image Watermarking In The Fractional Fourier Domain

Gokozan, Tolga 01 February 2005 (has links) (PDF)
One of the main features of digital technology is that digital media can be duplicated and reproduced easily. However, this allows unauthorized and illegal use of information, i.e. data piracy. To protect digital media against such attempts, a signal called a watermark is embedded into the multimedia data in a robust and invisible manner. A watermark is a short sequence of information containing the owner's identity; it is used as evidence of ownership and for copyright purposes. In this thesis, we use the fractional Fourier transformation (FrFT) domain, which combines the space and spatial frequency domains, for watermark embedding, and we implement the well-known secure spread spectrum watermarking approach. However, the spread spectrum watermarking scheme is fragile against geometrical attacks such as rotation and scaling. To gain robustness against geometrical attacks, an invisible template is inserted into the watermarked image in the Fourier transformation domain. The template contains no information in itself but is used to detect the transformations undergone by the image. Once the template is detected, these transformations are inverted and the watermark signal is decoded. Watermark embedding is performed by considering the masking characteristics of the Human Visual System to ensure watermark invisibility. In addition, we implement watermarking algorithms that use other transformation domains, namely the discrete cosine, discrete Fourier and discrete wavelet transformation domains, for watermark embedding. The performance of these algorithms and of the FrFT domain watermarking scheme is evaluated against various attacks and distortions, and their robustness is compared.
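For context, the sketch below illustrates the classic spread-spectrum embedding rule that this kind of scheme builds on: a key-dependent pseudo-random sequence multiplicatively modulates the largest-magnitude transform coefficients, and detection correlates the recovered sequence with the original. For simplicity it uses a 2-D DCT from SciPy rather than the fractional Fourier transform used in the thesis, and the strength alpha, watermark length, and coefficient selection rule are illustrative assumptions.

```python
# Spread-spectrum watermarking sketch in a DCT domain (illustrative, non-blind).
import numpy as np
from scipy.fft import dctn, idctn

def embed(image, key, length=1000, alpha=0.1):
    rng = np.random.default_rng(key)
    w = rng.standard_normal(length)                 # the watermark sequence
    coeffs = dctn(image.astype(float), norm="ortho")
    flat = coeffs.ravel()
    # Pick the largest-magnitude coefficients, skipping the largest (typically DC).
    idx = np.argsort(np.abs(flat))[::-1][1:length + 1]
    flat[idx] *= (1.0 + alpha * w)                  # v' = v * (1 + alpha * w)
    return idctn(coeffs, norm="ortho"), w, idx

def detect(marked, original, w, idx, alpha=0.1):
    v_star = dctn(marked.astype(float), norm="ortho").ravel()[idx]
    v = dctn(original.astype(float), norm="ortho").ravel()[idx]
    w_est = (v_star - v) / (alpha * v)              # recovered watermark estimate
    return np.corrcoef(w_est, w)[0, 1]              # high correlation => watermark present

img = np.random.default_rng(0).integers(0, 256, (256, 256))
marked, w, idx = embed(img, key=42)
print(detect(marked, img, w, idx))
```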
16

Instrumented Monitoring And Dynamic Testing Of Metu Cable Stayed Pedestrian Bridge And Comparisons Against The Analytical Model Simulations

Ozerkan, Taner 01 July 2005 (has links) (PDF)
This study covers the structural instrumentation and monitoring of a 48.5-meter-long cable-stayed pedestrian bridge located on Eskişehir road near the METU campus. The objectives of the study are (1) to monitor the bridge responses during the erection and operation stages so that strain changes can be determined during important events such as transportation, lifting, cabling, mid-support removal, slab concrete pouring and tile placement, (2) to determine the existing cable forces using vibration frequencies, and (3) to compare the experimental and analytical results for model updating. A total of 10 vibrating-wire strain gages were used for strain readings in steel members. Readings were taken at various stages of construction at intervals of 10 to 30 minutes. The bridge responses were monitored for about three months, and large strain changes on the order of 300 to 500 micro-strain were recorded during important events (e.g., transportation, lifting, cabling, mid-support removal, deck cover placement). Natural vibration frequency measurements of the deck and tower were conducted in the two main directions. Two different FE models were constructed at two levels of complexity, and the FEM analysis results were compared against the measured natural frequencies of the bridge and tower. The simpler analytical model was modified to include temporary support removal in order to perform a staged construction simulation and investigate cable force variations. Actual cable tensile forces were obtained using the measured cable natural vibration frequencies; the cable frequencies were measured using a CR10X data logger and a PCB 393C accelerometer. Existing cable forces are compared against analytical simulations and against those of symmetrically placed cables.
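For reference, the relation commonly used to back out a stay-cable tension from its measured natural frequencies is the taut-string formula below, written in generic notation; the abstract does not detail the thesis's identification procedure, which may additionally account for sag and bending stiffness, so this is only the basic relation.

```latex
% Taut-string relation between a cable's n-th measured natural frequency f_n
% and its tension T (sag and bending stiffness neglected); m is the cable mass
% per unit length and L its free length. Symbols are generic, not the thesis's.
\[
  f_n \;=\; \frac{n}{2L}\sqrt{\frac{T}{m}}
  \qquad\Longrightarrow\qquad
  T \;=\; \frac{4\, m\, L^{2} f_n^{2}}{n^{2}}
\]
```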
17

Predicting The Disease Of Alzheimer (ad) With Snp Biomarkers And Clinical Data Based Decision Support System Using Data Mining Classification Approaches

Erdogan, Onur 01 September 2012 (has links) (PDF)
Single Nucleotide Polymorphisms (SNPs) are the most common DNA sequence variations, in which only a single nucleotide (A, T, C, G) in the human genome differs between individuals. Besides being the main genetic reason behind individual phenotypic differences, SNP variations have the potential to reveal the molecular basis of many complex diseases. Associating SNP subsets with diseases and analyzing the genotyping data together with clinical findings will provide practical and affordable methodologies for the prediction of diseases in clinical settings. There is therefore a need to determine the SNP subsets and the patients' clinical data that are informative for the prediction or diagnosis of particular diseases. So far, there is no established approach for selecting the representative SNP subset and patients' clinical data. Data mining, a methodology based on finding hidden and key patterns in huge databases, has the highest potential for extracting knowledge from genomic datasets and for selecting the SNPs and the most effective clinical features that are informative and relevant for clinical diagnosis. In this study we applied one of the most widely used data mining classification methodologies, the decision tree, to associate SNP biomarkers and clinical data with Alzheimer's disease (AD), the most common form of dementia. Different tree construction parameters were compared for optimization, and the most efficient and accurate tree for predicting AD is presented.
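A minimal sketch of this kind of decision-tree classification, using scikit-learn, is given below. The genotype coding (0/1/2 minor-allele counts), the clinical feature names, the synthetic labels, and the grid of tree-construction parameters are illustrative assumptions; the thesis's actual SNP panel, clinical variables, and tuning procedure are not reproduced.

```python
# Decision-tree classification sketch over SNP genotypes and clinical features.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(0, 3, n),     # hypothetical SNP 1, coded as minor-allele count
    rng.integers(0, 3, n),     # hypothetical SNP 2
    rng.integers(60, 90, n),   # hypothetical clinical feature: age
    rng.integers(0, 31, n),    # hypothetical clinical feature: MMSE score
])
y = rng.integers(0, 2, n)      # 1 = AD, 0 = control (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Compare tree-construction parameters, as the abstract describes, via a grid search.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"criterion": ["gini", "entropy"], "max_depth": [3, 5, None],
     "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```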
18

Data Envelopment Analysis And Malmquist Total Factor Productivity (tfp) Index: An Application To Turkish Automotive Industry

Karaduman, Alper 01 September 2006 (has links) (PDF)
This thesis shows how the relative efficiency of automotive companies can be evaluated and how changes in the productivity of these companies over time can be observed. The analysis covers 17 companies, the main manufacturers of the Turkish automotive industry. A method called the stepwise approach is used to determine the input and output factors. The two input variables used are the company's Payment for Raw Materials and Components and Payment for Wages and Insurances of Employees; the three output variables are Domestic Sales, Exports and Capacity Usage. The panel data, covering the period between 2001 and 2005, were obtained from OSD (Automotive Manufacturers Association). The efficiency analysis is performed using the basic Data Envelopment Analysis (DEA) models, namely the Charnes, Cooper and Rhodes (CCR) and Banker, Charnes and Cooper (BCC) models, and the linear programming models are solved with the software LINGO 10. After finding the overall efficiency, technical efficiency and scale efficiency of each company for each year, the changes in these efficiencies are analyzed using the Malmquist Total Factor Productivity (TFP) Index. The results are illustrated with tables and graphs for better understanding. When these results are analyzed, the negative effect of the 2001 economic crisis on the automotive industry can be observed. It is also seen that efficiency changes over time vary from company to company, because the companies produce seven types of vehicles and differ in important respects such as production technology, market and demand.
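For reference, the input-oriented CCR envelopment model has the standard linear-programming form below, written in generic notation rather than the thesis's; the BCC model adds the convexity constraint on the intensity weights, which yields variable returns to scale, and scale efficiency is the ratio of the CCR score to the BCC score.

```latex
% Input-oriented CCR envelopment model in standard form (generic notation):
% theta is the efficiency score of the company under evaluation, with input
% vector x_0 and output vector y_0; lambda_j are intensity weights over the
% j = 1,...,n companies with input vectors x_j and output vectors y_j.
% The BCC variant adds the constraint sum_j lambda_j = 1.
\[
\begin{aligned}
  \min_{\theta,\;\lambda}\quad & \theta \\
  \text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j \, x_{j} \;\le\; \theta \, x_{0}, \\
                   & \sum_{j=1}^{n} \lambda_j \, y_{j} \;\ge\; y_{0}, \\
                   & \lambda_j \ge 0, \qquad j = 1,\dots,n.
\end{aligned}
\]
```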
19

A Method For Robust Design Of Products Or Processes With Categorical Response

Erdural, Serkan 01 December 2006 (has links) (PDF)
In industrial processes, decreasing variation while achieving the targets is very important. For manufacturers, finding the optimal settings of product and process parameters that are capable of producing the desired results under varying conditions is crucial. In most cases, the quality response is measured on a continuous scale; in some cases, however, the desired quality response may be qualitative (categorical). There are many effective methods for designing robust products and processes through industrial experimentation when the response variable is continuous, but the methods proposed so far in the literature for robust design with categorical response variables have various limitations. This study offers a simple and effective method for the analysis of categorical response data for robust product or process design. The method handles both location and dispersion effects to explore robust settings in an effective way, and it is illustrated on two cases: a foam molding process design and an iron-casting process design.
20

Weapon-target Allocation And Scheduling For Air Defense With Time Varying Hit Probabilities

Gulez, Taner 01 June 2007 (has links) (PDF)
In this thesis, mathematical modeling and heuristic approaches are developed for the surface-to-air weapon-target allocation problem with time-varying single shot hit probabilities (SSHP) against linearly approaching threats. First, a nonlinear mathematical model of the problem is formulated to maximize the sum of the weighted survival probabilities of the assets to be defended. Next, the nonlinear objective function and constraints are linearized, and the time-varying SSHP values are approximated with appropriate closed forms and adapted to the resulting linear model. This model is tested on different scenarios and the results are compared with those of the original nonlinear model. It is observed that the linear model is solved much faster than the nonlinear model and produces reasonably good solutions. The solutions of both models show that engagements should be made as late as possible, when the threats are closer to the weapons, so that the SSHP values are higher. A construction heuristic is developed based on this scheme, and an improvement heuristic that uses the solution of the construction heuristic is also proposed. Finally, all methods are tested on forty defense scenarios. The two fastest solution methods, the linear model and the construction heuristic, are compared on a large scenario and proposed as appropriate solution techniques for weapon-target allocation problems.
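The asset-based objective described here takes, in its classical generic form, the shape sketched below; the notation is illustrative, and the thesis's exact model, including the scheduling variables and the linearization, is not reproduced.

```latex
% Classical asset-based weapon-target allocation objective in generic notation:
% w_a is the weight of defended asset a, T_a the set of threats approaching it,
% pi_t the lethality of threat t, p_{kt}(tau_{kt}) the time-varying SSHP of
% weapon k against threat t when fired at time tau_{kt}, and x_{kt} in {0,1}
% indicates whether weapon k engages threat t.
\[
  \max \;\; \sum_{a} w_a \prod_{t \in T_a}
  \left[\, 1 - \pi_t \prod_{k} \bigl(1 - p_{kt}(\tau_{kt})\bigr)^{x_{kt}} \right]
\]
```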
