171

Systematic study of near-infrared intersubband absorption of polar and semipolar GaN/AlN quantum wells

Machhadani, Houssaine, Beeler, M, Sakr, S, Warde, E, Kotsar, Y, Tchernycheva, M, Chauvat, M P., Ruterana, P, Nataf, G, De Mierry, Ph, Monroy, E, Julien, F H. January 2013 (has links)
We report on the observation of intersubband absorption in GaN/AlN quantum well superlattices grown on (11-22)-oriented GaN. The absorption is tuned in the 1.5–4.5 μm wavelength range by adjusting the well thickness. The semipolar samples are compared with polar samples with identical well thickness grown during the same run. The intersubband absorption of semipolar samples shows a significant red shift with respect to the polar ones due to the reduction of the internal electric field in the quantum wells. The experimental results are compared with simulations and confirm the reduction of the polarization discontinuity along the growth axis in the semipolar case. The absorption spectral shape depends on the sample growth direction: for polar quantum wells the intersubband spectrum is a sum of Lorentzian resonances, whereas a Gaussian shape is observed in the semipolar case. This dissimilarity is explained by different carrier localization in these two cases. / Funding agencies: EC FET-OPEN project Unitride (233950), EU ERC-StG project TeraGaN (278428), French National Research Agency project COSNI (ANR-08-BLAN-0298-01).
172

Focus of attention and gaze control for robot vision

Westelius, Carl-Johan January 1995 (has links)
This thesis deals with focus of attention control in active vision systems. A framework for hierarchical gaze control in a robot vision system is presented, and an implementation for a simulated robot is described. The robot is equipped with a heterogeneously sampled imaging system, a fovea, resembling the spatially varying resolution of a human retina. The relation between foveas and multiresolution image processing, as well as implications for image operations, is discussed. A stereo algorithm based on local phase differences is presented, both as a stand-alone algorithm and as part of a robot vergence control system. The algorithm is fast, can handle large disparities, and maintains subpixel accuracy. The method produces robust and accurate estimates of displacement on synthetic as well as real-life stereo images. Disparity filter design is discussed and a number of filters are tested, e.g. Gabor filters and lognorm quadrature filters. A design method for disparity filters having precisely one phase cycle is also presented. A theory for sequentially defined, data-modified focus of attention is presented. The theory is applied to a preattentive gaze control system consisting of three cooperating control strategies. The first is an object finder that uses circular symmetries as indications of possible objects and directs the fixation point accordingly. The second is an edge tracker that makes the fixation point follow structures in the scene. The third is a camera vergence control system which ensures that both eyes fixate on the same point. The coordination between the strategies is handled using potential fields in the robot parameter space. Finally, a new focus of attention method for disregarding filter responses from already modelled structures is presented. The method is based on a filtering method, normalized convolution, originally developed for filtering incomplete and uncertain data. By setting the certainty of the input data to zero in areas of known or predicted signals, a purposive removal of operator responses can be obtained. On succeeding levels, image features from these areas become 'invisible' and consequently do not attract the attention of the system. This technique also allows the system to effectively explore new events. By cancelling known, or modelled, signals the attention of the system is shifted to new events not yet described.
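The phase-difference disparity idea can be sketched in a few lines: a complex quadrature (Gabor) filter is applied to both image rows, and the local phase difference of the responses, divided by the filter's centre frequency, gives a subpixel displacement estimate without any explicit search. This is a minimal illustration under assumed filter parameters, not the thesis's implementation:

```python
import numpy as np

def gabor_1d(size, freq, sigma):
    """Complex 1-D Gabor (quadrature) filter at centre frequency `freq`."""
    x = np.arange(size) - size // 2
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(2j * np.pi * freq * x)

def phase_disparity(left_row, right_row, freq=0.1, size=21, sigma=4.0):
    """Disparity from the local phase difference of quadrature responses:
    d = (arg ql - arg qr) / (2*pi*freq). Subpixel, no matching search."""
    g = gabor_1d(size, freq, sigma)
    ql = np.convolve(left_row, g, mode="same")
    qr = np.convolve(right_row, g, mode="same")
    dphi = np.angle(ql * np.conj(qr))   # local phase difference
    return dphi / (2.0 * np.pi * freq)  # convert phase to pixels

# Synthetic check: the right row is the left row shifted by 3 pixels.
n = np.arange(256)
left = np.cos(2 * np.pi * 0.1 * n)
right = np.roll(left, 3)
d_mid = phase_disparity(left, right)[128]  # estimate at an interior pixel
```

In a full system the estimate would be computed over a scale pyramid so that large disparities are handled at coarse scales and refined at fine ones.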
173

Learning Multidimensional Signal Processing

Borga, Magnus January 1998 (has links)
The subject of this dissertation is to show how learning can be used for multidimensional signal processing, in particular computer vision. Learning is a wide concept, but it can generally be defined as a system’s change of behaviour in order to improve its performance in some sense. Learning systems can be divided into three classes: supervised learning, reinforcement learning and unsupervised learning. Supervised learning requires a set of training data with correct answers and can be seen as a kind of function approximation. A reinforcement learning system does not require a set of answers. It learns by maximizing a scalar feedback signal indicating the system’s performance. Unsupervised learning can be seen as a way of finding a good representation of the input signals according to a given criterion. In learning and signal processing, the choice of signal representation is a central issue. For high-dimensional signals, dimensionality reduction is often necessary. It is then important not to discard useful information. For this reason, learning methods based on maximizing mutual information are particularly interesting. A properly chosen data representation allows local linear models to be used in learning systems. Such models have the advantage of having a small number of parameters and can for this reason be estimated by using relatively few samples. An interesting method that can be used to estimate local linear models is canonical correlation analysis (CCA). CCA is strongly related to mutual information. The relation between CCA and three other linear methods is discussed. These methods are principal component analysis (PCA), partial least squares (PLS) and multivariate linear regression (MLR). An iterative method for CCA, PCA, PLS and MLR, in particular low-rank versions of these methods, is presented. A novel method for learning filters for multidimensional signal processing using CCA is presented. 
By showing the system signals in pairs, the filters can be adapted to detect certain features and to be invariant to others. A new method for local orientation estimation has been developed using this principle. This method is significantly less sensitive to noise than previously used methods. Finally, a novel stereo algorithm is presented. This algorithm uses CCA and phase analysis to detect the disparity in stereo images. The algorithm adapts filters in each local neighbourhood of the image in a way which maximizes the correlation between the filtered images. The adapted filters are then analysed to find the disparity. This is done by a simple phase analysis of the scalar product of the filters. The algorithm can even handle cases where the images have different scales. The algorithm can also handle depth discontinuities and give multiple depth estimates for semi-transparent images.
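As an illustration of the CCA machinery discussed above, the canonical correlations can be computed by whitening each variable set and taking an SVD of the product of the whitened bases. This is the standard textbook formulation, not the iterative low-rank method the thesis presents:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlations of two variable sets, via whitening + SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Thin SVDs give orthonormal bases for each centred data block;
    # the singular values of Ux^T Uy are the canonical correlations.
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

# Y is an invertible linear map of X, so every canonical correlation is 1.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
M = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 3.0]])
rho = cca(X, X @ M)
```

Because the correlations are invariant to invertible linear maps of either set, CCA recovers the shared subspace regardless of how each signal is parameterized, which is what makes it attractive for adapting filters.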
174

Reinforcement Learning and Distributed Local Model Synthesis

Landelius, Tomas January 1997 (has links)
Reinforcement learning is a general and powerful way to formulate complex learning problems and acquire good system behaviour. The goal of a reinforcement learning system is to maximize a long-term sum of instantaneous rewards provided by a teacher. In its extreme form, reinforcement learning only requires that the teacher can provide a measure of success. This formulation does not require a training set with correct responses, and allows the system to become better than its teacher. In reinforcement learning much of the burden is moved from the teacher to the training algorithm. The exact and general algorithms that exist for these problems are based on dynamic programming (DP), and have a computational complexity that grows exponentially with the dimensionality of the state space. These algorithms can only be applied to real-world problems if an efficient encoding of the state space can be found. To cope with these problems, heuristic algorithms and function approximation need to be incorporated. In this thesis it is argued that local models have the potential to help solve problems in high-dimensional spaces, whereas global models do not. This is motivated by the bias-variance dilemma, which is resolved with the assumption that the system is constrained to live on a low-dimensional manifold in the space of inputs and outputs. This observation leads to the introduction of bias in terms of continuity and locality. A linear approximation of the system dynamics and a quadratic function describing the long-term reward are suggested to constitute a suitable local model. For problems involving one such model, i.e. linear quadratic regulation problems, novel convergence proofs for heuristic DP algorithms are presented. This is one of the few available convergence proofs for reinforcement learning in continuous state spaces. Reinforcement learning is closely related to optimal control, where local models are commonly used.
Relations to present methods are investigated, e.g. adaptive control, gain scheduling, fuzzy control, and jump linear systems. Ideas from these areas are compiled in a synergistic way to produce a new algorithm for heuristic dynamic programming where function parameters and locality, expressed as model applicability, are learned on-line. Both top-down and bottom-up versions are presented. The emerging local models and their applicability need to be memorized by the learning system. The binary tree is put forward as a suitable data structure for on-line storage and retrieval of these functions.
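The linear-quadratic local model can be made concrete with the classical discrete-time Riccati backup, the exact DP counterpart of the heuristic algorithms whose convergence the thesis studies. A minimal sketch, not the thesis's on-line algorithm:

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati backup until the quadratic value
    function V(x) = x'Px stops changing; u = -Lx is the greedy policy."""
    P = Q.copy()
    for _ in range(iters):
        S = R + B.T @ P @ B
        L = np.linalg.solve(S, B.T @ P @ A)     # gain for the current P
        P = Q + A.T @ P @ A - A.T @ P @ B @ L   # DP backup on the value matrix
    return P, L

# Double integrator: position/velocity state, force input.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
P, L = lqr_value_iteration(A, B, np.eye(2), np.eye(1))
```

A heuristic-DP learner estimates P from sampled transitions and rewards instead of from known A, B, Q, R, but converges to this same fixed point in the linear quadratic case.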
175

Signal Processing for Robust and Real-Time fMRI With Application to Brain Computer Interfaces

Eklund, Anders January 2010 (has links)
It is hard to find another research field than functional magnetic resonance imaging (fMRI) that combines so many different areas of research. Without the beautiful physics of MRI we would not have any images to look at in the first place. To get images with good quality it is necessary to fully understand the concepts of the frequency domain. The analysis of fMRI data requires understanding of signal processing and statistics, and also knowledge about the anatomy and function of the human brain. The resulting brain activity maps are used by physicians and neurologists in order to plan surgery and to increase their understanding of how the brain works. This thesis presents methods for signal processing of fMRI data in real-time situations. Real-time fMRI puts higher demands on the signal processing than conventional fMRI, since all the calculations have to be made in real time and in more complex situations. The result of the real-time fMRI analysis can, for example, be used to look at the subject's brain activity in real time, for interactive planning of surgery or understanding of brain functions. Another possibility is to use the result to change the stimulus that is given to the subject, such that the brain and the computer can work together to solve a given task. These kinds of setups are often called brain computer interfaces (BCI). Two BCIs are presented in this thesis. In the first, the subject was able to balance a virtual inverted pendulum by thinking of activating the left or right hand, or resting. In the second, the subject in the MR scanner was able to communicate with a person outside the MR scanner through a communication interface. Since head motion is common during fMRI experiments, it is necessary to apply image registration to align the collected volumes. Doing image registration in real time can be a challenging task; therefore an implementation of a volume registration algorithm on a graphics card is presented. The power of modern graphics cards can also be used to save time in the daily clinical work; an example of this is also given in the thesis. Finally, a method for calculating and incorporating a structure-based certainty in the analysis of the fMRI data is proposed. The results show that the structural certainty helps to remove false activity that can occur due to head motion, especially at the edge of the brain.
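The thesis's GPU registration algorithm is not reproduced here, but the flavour of real-time motion estimation can be illustrated with phase correlation, a standard technique for recovering the translation between two images from the peak of the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Integer (dy, dx) translation between two images, estimated from the
    peak of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real         # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    return (int(dy) - h if dy > h // 2 else int(dy),
            int(dx) - w if dx > w // 2 else int(dx))

rng = np.random.default_rng(2)
ref = rng.standard_normal((64, 64))
shift = phase_correlation_shift(ref, np.roll(ref, (5, -3), axis=(0, 1)))
```

Real fMRI registration must additionally handle rotation and subvoxel motion in 3-D, which is where the parallelism of a graphics card pays off.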
176

A Research Platform for Embodied Visual Object Recognition

Wallenberg, Marcus, Forssén, Per-Erik January 2010 (has links)
We present in this paper a research platform for development and evaluation of embodied visual object recognition strategies. The platform uses a stereoscopic peripheral-foveal camera system and a fast pan-tilt unit to perform saliency-based visual search. This is combined with a classification framework based on the bag-of-features paradigm with the aim of targeting, classifying and recognising objects. Interaction with the system is done via typed commands and speech synthesis. We also report the current classification performance of the system.
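The bag-of-features representation mentioned above can be sketched as follows: local descriptors are quantized against a learned visual vocabulary, and each image is summarized by a normalized word histogram that a classifier can compare. This is a generic toy illustration with a tiny k-means, not the platform's actual pipeline; all sizes and parameters are arbitrary:

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20):
    """Tiny k-means: quantize local descriptors into k 'visual words'."""
    idx = np.linspace(0, len(descriptors) - 1, k).astype(int)
    centers = descriptors[idx].astype(float)  # deterministic spread-out init
    for _ in range(iters):
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bof_histogram(descriptors, centers):
    """Normalized visual-word histogram: the image signature to classify."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Two toy 'images' whose descriptors come from well-separated clusters.
rng = np.random.default_rng(1)
descA = rng.normal(0.0, 0.1, (50, 2))
descB = rng.normal(5.0, 0.1, (50, 2))
vocab = build_vocabulary(np.vstack([descA, descB]), k=2)
hA, hB = bof_histogram(descA, vocab), bof_histogram(descB, vocab)
```

In practice the descriptors would be high-dimensional local features extracted around salient points found by the visual search, and the histograms would feed a trained classifier.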
177

Simulation of Surrounding Vehicles in Driving Simulators

Olstam, Johan January 2009 (has links)
Driving simulators and microscopic traffic simulation are important tools for evaluating driving and traffic. A driving simulator is designed to imitate real driving and is used to conduct experiments on driver behavior. Traffic simulation is commonly used to evaluate the quality of service of different infrastructure designs. This thesis considers a different application of traffic simulation, namely the simulation of surrounding vehicles in driving simulators. The surrounding traffic is one of several factors that influence a driver's mental load and ability to drive a vehicle. The representation of the surrounding vehicles in a driving simulator plays an important role in the effort to create an illusion of real driving. If the illusion of real driving is not good enough, there is a risk that drivers will behave differently than in real-world driving, implying that the results and conclusions reached from simulations may not be transferable to real driving. This thesis has two main objectives. The first objective is to develop a model for generating and simulating autonomous surrounding vehicles in a driving simulator. The approach used by the developed model is to simulate only the closest area around the driving simulator vehicle. This area is divided into one inner region and two outer regions. Vehicles in the inner region are simulated according to a microscopic model which includes sub-models for driving behavior, while vehicles in the outer regions are updated according to a less time-consuming mesoscopic model. The second objective is to develop an algorithm for combining autonomous vehicles and controlled events. Driving simulators are often used to study situations that rarely occur in the real traffic system. In order to create the same situations for each subject, the behavior of the surrounding vehicles has traditionally been strictly controlled. This often leads to less realistic surrounding traffic. The algorithm developed makes it possible to use autonomous traffic between the predefined controlled situations, and thereby obtain both realistic traffic and controlled events. The model and the algorithm have been implemented and tested in the VTI driving simulator with promising results.
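The inner-region microscopic simulation can be illustrated with a generic car-following law. The sketch below uses the Intelligent Driver Model as a stand-in, since the thesis's own behavioural sub-models are not reproduced here; a follower starting from rest settles at the speed of a constant-speed leader:

```python
import numpy as np

def idm_accel(v, dv, gap, v0=25.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration (generic car-following law).
    v: own speed, dv: approach rate (v - v_leader), gap: bumper-to-bumper
    distance. Parameters are illustrative defaults."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Follower starting from rest, 50 m behind a leader cruising at 20 m/s.
dt, v_lead = 0.1, 20.0
x_f, v_f, x_l = 0.0, 0.0, 50.0
for _ in range(5000):  # 500 s of simulated time
    acc = idm_accel(v_f, v_f - v_lead, x_l - x_f)
    v_f = max(0.0, v_f + acc * dt)
    x_f += v_f * dt
    x_l += v_lead * dt
```

A two-region scheme would run a law like this only for vehicles near the simulator vehicle and update distant vehicles with a cheaper aggregate model.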
178

Matarverk för Camcoil / Strip feeder for Camcoil

Larsson, Patrik, Perätalo, Thom January 2008 (has links)
Our employer, Camatec Industriteknik AB, has given us the assignment to continue the previous work "Design of strip feeder". Our goal is to produce a design basis for a strip feeder. Functionally, a strip feeder consists of two rollers which, with the aid of the friction between the strip and the coating on the rollers, create a strip force that pulls the strip through the working process. Demands on this project: • 10-600 mm strip width and 0.1-1 mm strip thickness. • Minimum 30 m/min production speed. • A maximum strip force of 30-40 kN. • The cost of the strip feeder's components is to be weighed against economy and quality. The need for a strip feeder arises when the coils cannot feed or pull the strip before or after a working process. Otherwise the strip can be exposed to permanent yield deformation or a change in the strip material if it is pulled through the working process by the on- and off-coilers alone. High friction can also occur in these processes, which can cause problems with the feeding speed of the strip if the coils are not complemented by a strip feeder. From the feasibility study we concluded to continue with a smaller S-feeder with pressure. The pressure is achieved by two hydraulic cylinders and a leveler on the upper frame. In the continued design the strip feeder is dimensioned with a safety factor of 1.5 against the highest strip force calculated during the feasibility study. Bearings, hydraulic cylinders, motor and gearbox have also been dimensioned for these force reactions. With the coating material Slitan 96 Shore A on the rollers, the theoretical strip force is 50400 N at a pressure of 64915 N if the strip feeder is used as an S-feeder with pressure. If a simple strip feeder must be used, the theoretical strip force is 12900 N at a pressure of 64915 N. The strip force is limited by the coating material's capabilities, since at high pressure the strip risks being baked into the polyurethane and the coating will be damaged by the stresses that a larger pressure causes.
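The relation between roller pressure and achievable strip force can be sketched with an assumed Coulomb-friction model. Both the model form and the numbers below are illustrative assumptions, not values or formulas from the thesis:

```python
import math

def simple_feed_force(mu, pinch_force_n):
    """Feed force from a plain two-roller pinch: F = 2 * mu * N.
    (Assumed Coulomb model; mu is an illustrative coefficient.)"""
    return 2.0 * mu * pinch_force_n

def s_feeder_force(mu, pinch_force_n, wrap_angle_rad):
    """S-feeder: the strip also wraps the rollers, so the pinch-generated
    force is amplified capstan-style by e^{mu * theta} (assumed model)."""
    return simple_feed_force(mu, pinch_force_n) * math.exp(mu * wrap_angle_rad)

plain = simple_feed_force(0.1, 64915.0)          # plain pinch estimate
wrapped = s_feeder_force(0.1, 64915.0, math.pi)  # with half-turn wrap
```

Whatever the exact model, the qualitative point from the abstract holds: wrapping the strip around the rollers multiplies the force available from a given pressure, which is why the S-feeder outperforms the simple feeder at the same 64915 N.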
179

Förslag till ombyggnation av klipp- och bockningsmaskin / Rebuilding of a cutting and bending machine

Haglund, Johan, Gustafsson, Joel January 2008 (has links)
This thesis work has been performed at the request of SWT (Scandinavia WeldTech AB). SWT manufactures, develops and sells ready-to-assemble building systems for concrete rafter beam applications. The beams in this system consist of a U-beam that is welded to a flange. To be able to fill the beams with concrete, there are holes in the beam's top end. The holes are made in a "punching machine": they are cut out and the remaining steel piece is bent down into the U-beam. The problem today is that the machine can only make holes in beams with a height up to 340 mm, and SWT now wants a machine that can handle beams up to 500 mm high. Our task in this work was to make a feasibility study of what needs to be done to rebuild the machine. Besides the demand for higher beams, we also looked for things to improve, and we investigated the requirements for cutting beams made of thicker steel plate. In this work we used some of the theories described by David G. Ullman for the concept generation and evaluation. When generating the concepts we chose to make a concept for each subpart of the machine, and then put the winning concepts together to form a final solution for the whole machine. Looking at the wear on the tools, we saw that much could be gained by making the control of the cutting column easier when adjusting the tool. To determine the demands of cutting thicker plate, we had to develop a theory for calculating cutting and bending forces, and then compare the theory with reality by measuring the pressure on the working cylinder while the machine was operating. Those results showed that our theory worked for 5 mm thickness but not for 4 mm; however, some uncertainty makes our measured results not entirely reliable for 4 mm steel. Regarding the demand to manage higher beams, we came to the conclusion that the best alternative is to manufacture a new frame. This is also suggested for the plates in the feeding unit, but we recommend that the existing cylinders and roof be reused. For the problem of making the tool adjustments easier, we propose two solutions. The alternative with manual adjustment and assembly demands a reconstruction of the hydraulic system so that its functions meet the demands needed to realize the proposed solution; the advantage of this proposal is that the operator can always affect the position of the tools so that the correct cutting column is obtained. Our other suggestion is to install permanent guides so that the lower die ends up in the right position when it is mounted. This proposal does not require any major reconstruction, but it does not make it possible to adjust the position of the cutting tool. To work properly, both our solutions require that the lower dies be modified. To make the mounting of the upper die easier, we devised two solutions that, in short, hold the die up before mounting, so the operator has both hands free when tightening the bolts. If thicker plate is to be cut, large reconstructions of the hydraulic system, with new cylinders and more, will be needed, because the maximum pressing force the machine is capable of is too small. For a small change in thickness, however, it might be enough to lower the required force by reducing the friction, through better lubrication and modified tools.
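The cutting-force theory itself is not given in the abstract; the classical blanking estimate F = L·t·τ (cut perimeter × plate thickness × shear strength) is a common starting point for such calculations and is sketched here with illustrative numbers that are not from the thesis:

```python
def punch_force_n(perimeter_mm, thickness_mm, shear_strength_mpa):
    """Classical blanking/punching force estimate: F = L * t * tau.
    MPa = N/mm^2, so mm * mm * MPa yields newtons directly."""
    return perimeter_mm * thickness_mm * shear_strength_mpa

# Illustrative (assumed) case: a hole with 300 mm cut perimeter in
# 5 mm plate with a shear strength of 330 MPa.
force_kn = punch_force_n(300.0, 5.0, 330.0) / 1000.0
```

A quick estimate like this, compared against the machine's maximum pressing force, shows directly why moving to thicker plate pushes the existing hydraulics past their limit.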
180

Redesign of readout driver using FPGA / Modernisering av datautläsningsenhet mha FPGA

Klöfver, Per January 2008 (has links)
In the ATLAS experiment now being finished at CERN in Geneva, bunches of protons will collide at a rate of 40 million times per second. Over 40 TB of data will be generated every second. In order to reduce the amount of data to a more manageable level, a system of triggers is put in place. The trigger system must quickly evaluate whether the data from a collision indicate that an interesting physical process took place, in which case the data are stored for further analysis.    ATLAS uses a trigger system with three steps. The first step, the First Level Trigger, is responsible for reducing the rate from 40 MHz to 75 kHz, and is implemented completely in hardware. It receives a new event every 25 ns, and must decide within 2.5 μs whether the event should be passed on to the next trigger level.    This document describes the redesign of two subsystems of the First Level Trigger. When prototypes were made 5-10 years ago, both subsystems used 7 PLDs. Today, the same logic can be fitted in one FPGA, and because of the flexibility gained by having all logic in a single FPGA, both subsystems could be realized with the same PCB design.
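The quoted trigger figures imply two numbers worth spelling out: the depth of the pipeline that must buffer events while the First Level Trigger decides, and the rate-reduction factor the trigger must achieve:

```python
# First Level Trigger bookkeeping from the figures quoted above.
bunch_interval_ns = 25.0             # a new event every 25 ns (40 MHz)
latency_us = 2.5                     # decision deadline per event
input_rate_hz, output_rate_hz = 40e6, 75e3

# Events that must be buffered while the trigger decides on the oldest one.
pipeline_depth = int(latency_us * 1000.0 / bunch_interval_ns)
# Fraction of events the first level must reject.
rejection_factor = input_rate_hz / output_rate_hz
```

So roughly 100 events are in flight at any moment, and only about one event in 533 survives the first trigger level, which is why the decision logic must be pure hardware.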
