21 |
Face pose estimation in monocular images. Shafi, Muhammad. January 2010
People use the orientation of their faces to convey rich interpersonal information. For example, a person will direct his face to indicate who the intended target of the conversation is. Similarly, in a conversation, face orientation is a non-verbal cue that tells the listener when to switch roles and start speaking, and a nod indicates that a person understands, or agrees with, what is being said. Furthermore, face pose estimation plays an important role in human-computer interaction, virtual reality applications, human behaviour analysis, pose-independent face recognition, driver vigilance assessment, gaze estimation, etc. Robust face recognition has been a focus of research in the computer vision community for more than two decades. Although substantial research has been done and numerous methods have been proposed for face recognition, challenges remain in this field. One of these is face recognition under varying poses, which is why face pose estimation is still an important research area. In computer vision, face pose estimation is the process of inferring face orientation from digital imagery. It requires a series of image processing steps to transform a pixel-based representation of a human face into a high-level concept of direction. An ideal face pose estimator should be invariant to a variety of image-changing factors such as camera distortion, lighting conditions, skin colour, projective geometry, facial hair, facial expressions, the presence of accessories like glasses and hats, etc. Face pose estimation has been a focus of research for about two decades and numerous research contributions have been presented in this field. Nevertheless, the face pose estimation techniques in the literature still have shortcomings and limitations in terms of accuracy, applicability to monocular images, autonomy, robustness to identity and lighting variations, image resolution variations, range of face motion, computational expense, the presence of facial hair, the presence of accessories like glasses and hats, etc. These shortcomings of existing face pose estimation techniques motivated the research presented in this thesis. The main focus of this research is to design and develop novel face pose estimation algorithms that improve automatic face pose estimation in terms of processing time, computational expense, and invariance to different conditions.
|
22 |
Design and analysis of an integrated pulse modulated S-band power amplifier in gallium nitride process. Sedlock, Steve. January 1900
Master of Science / Department of Electrical and Computer Engineering / William B. Kuhn / The design of power amplifiers in any semiconductor process is not a trivial exercise, and it is often found that the simulated solution differs significantly from the measured results. Oscillatory phenomena occurring either in-band or out of band, and sometimes at subharmonic intervals, can render a design useless. Other less apparent effects such as jumps, hysteresis and continuous spectra, often referred to as chaos, can also invalidate a design. All of these problems might have been identified through a more rigorous approach to stability analysis.
Designing for stability is probably the one area of amplifier design that receives the least attention but incurs the most catastrophic effects if it is not performed properly. Other parameters such as gain, power output, frequency response and even matching may have suitable mitigation paths, but a lack of stability in an amplifier has no mitigation path. In addition to the loss of the design, there are increased production cycle costs, the costs of investigating and resolving the problem, and the costs of the schedule slips or delays that result from it.
The Linville or Rollett stability criterion that many microwave engineers rely on exclusively is not by itself sufficient to ensure a stable and robust design. It will be shown that the belief that unconditional stability is guaranteed by an analysis of the scattering matrix S to determine whether K > 1 and |Δ| < 1 can fail, and that other tools must be used to validate circuit stability.
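As a concrete reference for the criterion just cited, here is a minimal sketch (not from the thesis) of how K and |Δ| are computed from a two-port scattering matrix at a single frequency point; the thesis's argument is precisely that passing this check alone does not guarantee stability.

```python
# Rollett stability check for a 2-port described by its S-matrix:
# K > 1 together with |Delta| < 1, where Delta = S11*S22 - S12*S21.
import numpy as np

def rollett_k_delta(S):
    """Return (K, |Delta|) for a 2x2 complex S-parameter matrix."""
    s11, s12, s21, s22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    delta = s11 * s22 - s12 * s21
    K = (1 - abs(s11)**2 - abs(s22)**2 + abs(delta)**2) / (2 * abs(s12 * s21))
    return K, abs(delta)

# Example with made-up S-parameters at one frequency point:
S = np.array([[0.6 * np.exp(-1j * 2.8), 0.05 * np.exp(1j * 1.2)],
              [3.5 * np.exp(1j * 0.9),  0.5 * np.exp(-1j * 1.5)]])
K, mag_delta = rollett_k_delta(S)
# "Unconditionally stable" by this test only if K > 1 and |Delta| < 1.
print(f"K = {K:.3f}, |Delta| = {mag_delta:.3f}")
```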
With the emphasis placed on stability, a 1 W pulse modulated S-band power amplifier is designed using a battery of analysis tools in addition to the standard Linville or Rollett criteria to rigorously confirm the stability of the circuit. Test measurements are then presented to confirm the stability of the design and illustrate the results.
The research contributes to the state of the art by offering a detailed approach to stability design and then applying the techniques to the design of a 1 W pulse modulated S-band power amplifier, demonstrating for the first time 20-nanosecond pulse-width switching with single-digit-nanosecond rise and fall times at 1 W power levels.
|
23 |
Power allocation and cell association in cellular networks. Ho, Danh Huu. 26 August 2019
In this dissertation, power allocation approaches that account for path loss, shadowing, and Rayleigh and Nakagami-m fading are proposed. The goal is to reduce power consumption and improve energy and throughput efficiency subject to user target signal to interference plus noise ratio (SINR) requirements and an outage probability threshold. First, using the moment generating function (MGF), the exact outage probability over Rayleigh and Nakagami-m fading channels is derived. Upper and lower bounds on the outage probability are then derived using the Weierstrass, Bernoulli and exponential inequalities. Second, the problem of minimizing user power subject to outage probability and user target SINR constraints is considered. The corresponding power allocation problems are solved using Perron-Frobenius theory and geometric programming (GP). A GP problem can be transformed into a nonlinear convex optimization problem using variable substitution and then solved globally and efficiently by interior point methods. Power allocation problems for throughput maximization and energy efficiency are then proposed. As these problems have a convex fractional programming form, a parametric transformation is used to convert them into subtractive optimization problems that can be solved iteratively, as sketched below. Simulation results are presented which show that the proposed approaches outperform existing schemes in terms of power consumption, throughput, energy efficiency and outage probability.
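The parametric transformation mentioned above is, in one standard form, Dinkelbach's algorithm: a fractional objective f(x)/g(x) with g(x) > 0 is maximized through a sequence of subtractive problems max f(x) − λ·g(x). A minimal numeric sketch with illustrative functions (an assumption for exposition, not the dissertation's exact algorithm):

```python
# Dinkelbach-type parametric transformation for a scalar fractional program.
import numpy as np
from scipy.optimize import minimize_scalar

def dinkelbach(f, g, x_bounds, tol=1e-8, max_iter=50):
    lam = 0.0
    for _ in range(max_iter):
        # Subtractive subproblem: maximize f(x) - lam * g(x) over the interval.
        res = minimize_scalar(lambda x: -(f(x) - lam * g(x)),
                              bounds=x_bounds, method="bounded")
        x = res.x
        if abs(f(x) - lam * g(x)) < tol:   # optimality condition F(lam*) = 0
            return x, lam
        lam = f(x) / g(x)                   # parameter update
    return x, lam

# Toy energy-efficiency-style ratio: rate-like numerator over power-like
# denominator (illustrative functions only).
f = lambda p: np.log2(1.0 + 2.0 * p)   # throughput
g = lambda p: p + 0.1                  # transmit power plus circuit power
p_star, ee = dinkelbach(f, g, (0.0, 10.0))
print(f"optimal power ~ {p_star:.4f}, efficiency ~ {ee:.4f}")
```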
Prioritized cell association and power allocation (CAPA) to address the load balancing issue in heterogeneous networks (HetNets) is also considered in this dissertation. A HetNet is a group of macrocell base stations (MBSs) underlaid by a diverse set of small cell base stations (SBSs) such as microcells, picocells and femtocells. These networks are considered a good solution to enhance network capacity, improve network coverage, and reduce power consumption. However, HetNets are limited by the disparity of power levels in the different tiers. Conventional cell association approaches cause MBS overloading, SBS underutilization, excessive user interference and wasted resources. Satisfying priority user (PU) requirements while maximizing the number of normal users (NUs) served has not been considered in existing power allocation algorithms. A two-stage CAPA optimization is proposed to address the prioritized cell association and power allocation problem. The first stage is employed by PUs and NUs and the second stage is employed by BSs. First, the product of the channel access likelihood (CAL) and the channel gain to interference plus noise ratio (GINR) is used for PU cell association, while network utility is used for NU cell association. Here, CAL is defined as the reciprocal of the BS load. In CAL and GINR cell association, PUs are associated with the BSs that provide the maximum product of CAL and GINR, as sketched below; this implies that PUs connect to BSs with a low number of users and good channel conditions. NUs are connected to BSs so that the network utility is maximized, and this is achieved using an iterative algorithm. Second, prioritized power allocation is used to reduce power consumption and satisfy the target SINRs of as many NUs as possible while ensuring that PU requirements are satisfied.
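A minimal sketch of the first-stage PU association rule with hypothetical numbers (the gains, loads and noise values below are illustrative only, not from the dissertation):

```python
# Each priority user picks the base station maximizing CAL * GINR,
# with CAL = 1 / (BS load).
import numpy as np

gain = np.array([[1.0e-6, 4.0e-7, 9.0e-7],    # channel gain, PUs x BSs
                 [2.0e-7, 8.0e-7, 3.0e-7]])
interference_noise = np.array([1.0e-9, 5.0e-10, 2.0e-9])  # per BS (assumed)
load = np.array([12, 3, 6])                    # current number of users per BS

ginr = gain / interference_noise               # gain-to-interference-plus-noise
cal = 1.0 / load                               # channel access likelihood
metric = cal * ginr                            # PU association metric
chosen_bs = metric.argmax(axis=1)
print(chosen_bs)                               # serving BS index for each PU
```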
Performance results are presented which show that the proposed schemes provide fair and efficient solutions that reduce power consumption and converge faster than conventional CAPA schemes. / Graduate
|
24 |
Normalized Convolution Network and Dataset Generation for Refining Stereo Disparity Maps. Cranston, Daniel; Skarfelt, Filip. January 2019
Finding disparity maps between stereo images is a well-studied topic within computer vision. While both classical and machine learning approaches exist in the literature, they frequently struggle to correctly solve the disparity in regions with low texture, sharp edges or occlusions. Finding approximate solutions for these problem areas is usually referred to as disparity refinement, and is typically carried out separately after an initial disparity map has been generated. In the recent literature, the use of Normalized Convolution in convolutional neural networks has shown remarkable results when applied to the task of stereo depth completion (a sketch of the underlying operation follows this abstract). This thesis investigates how well this approach performs in the case of disparity refinement. Specifically, we investigate how well such a method can improve the initial disparity maps generated by the stereo matching algorithm developed at Saab Dynamics using a rectified stereo rig. To this end, a dataset of ground truth disparity maps was created using equipment at Saab, namely a structured light setup and the stereo rig cameras. Because the end goal is a dataset fit for training networks, we investigate an approach that allows for the efficient creation of significant quantities of dense ground truth disparities. The method generates several disparity maps for every scene by using several stereo pairs; a densified disparity map is then produced by merging the disparity maps from the neighbouring stereo pairs. This resulted in a dataset of 26 scenes and 104 dense and accurate disparity maps. Our evaluation results show that the chosen Normalized Convolution Network based method can be adapted for disparity map refinement, but is dependent on the quality of the input disparity map.
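As background, the core normalized-convolution operation (normalized averaging with a confidence map) can be sketched as follows; this illustrates the general technique, not the thesis's network:

```python
# Normalized averaging: low-confidence disparities are filled from
# confident neighbours weighted by an applicability kernel.
import numpy as np
from scipy.ndimage import convolve

def normalized_average(disparity, confidence, applicability):
    """out = conv(d * c, a) / conv(c, a), guarding against divide-by-zero."""
    num = convolve(disparity * confidence, applicability, mode="nearest")
    den = convolve(confidence, applicability, mode="nearest")
    return num / np.maximum(den, 1e-12)

# 0/1 confidence: zeros mark occluded or textureless pixels to be filled.
disp = np.array([[10., 10.,  0., 11.],
                 [10.,  0.,  0., 11.],
                 [ 9., 10., 11., 11.]])
conf = (disp > 0).astype(float)
applic = np.ones((3, 3)) / 9.0        # flat applicability kernel
print(normalized_average(disp, conf, applic))
```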
|
25 |
Variações de área das geleiras da Colômbia e da Venezuela entre 1985 e 2015, com dados de sensoriamento remoto / Glacier area variations in Colombia and Venezuela between 1985 and 2015, with remote sensing data. Rekowsky, Isabel Cristiane. January 2016
Nesse estudo foram mapeadas e mensuradas as variações de área, elevação mínima e orientação das geleiras da Colômbia e da Venezuela (trópicos internos), entre os anos 1985-2015. Para o mapeamento das áreas das geleiras foram utilizadas como base imagens Landsat, sensores TM, ETM+ e OLI. Às imagens selecionadas foi aplicado o Normalized Difference Snow Index (NDSI), no qual são utilizadas duas bandas em que o alvo apresenta comportamento espectral oposto ou com características bem distintas: bandas 2 e 5 dos sensores TM e ETM+ e bandas 3 e 6 do sensor OLI. Os dados de elevação e orientação das massas de gelo foram obtidos a partir do Modelo Digital de Elevação SRTM (Shuttle Radar Topography Mission – v03). Em 1985, a soma das áreas das sete geleiras estudadas correspondia a 92,84 km², enquanto no último ano estudado (2015/2016) esse valor passou para 36,97 km². A redução de área ocorreu em todas as geleiras analisadas, com taxas de retração anual variando entre 2,49% a.a. e 8,46% a.a. Houve retração das áreas de gelo localizadas em todos os pontos cardeais considerados, bem como, elevação da altitude nas frentes de geleiras. Além da perda de área ocorrida nas menores altitudes, onde a taxa de ablação é mais elevada, também se observou retração em alguns topos, evidenciado pela ocorrência de altitudes menores nos anos finais do estudo, em comparação com os anos iniciais. Como parte das geleiras colombianas está localizada sobre vulcões ativos, essas áreas sofrem influência tanto de fatores externos, quanto de fatores internos, podendo ocorrer perdas de massa acentuadas causadas por erupção e/ou terremoto. / In this study, glaciers located in Colombia and Venezuela (inner tropics) were mapped between 1985 and 2015. The area of these glaciers was measured and the variations that occurred in each glacier were compared to identify whether the glacier was growing or shrinking. The minimum elevation of the glacier fronts and the aspect of the glaciers were also analyzed. The glacier areas were obtained from Landsat images (TM, ETM+ and OLI sensors). The Normalized Difference Snow Index (NDSI) was applied to the selected images, using two bands in which the ice mass has opposite (or very different) spectral behavior: bands 2 and 5 of the TM and ETM+ sensors, and bands 3 and 6 of the OLI sensor (sketched below). The elevation and aspect data of the glaciers were obtained from the SRTM (Shuttle Radar Topography Mission, v03) Digital Elevation Model. In 1985/1986, the sum of the areas of the seven studied glaciers was 92.84 km², while in the last year analyzed (2015/2016) this value had shrunk to 36.97 km². Area shrinkage occurred in all the glaciers that were mapped, with annual decline rates ranging from 2.49%/year to 8.46%/year. A decrease of the ice cover was observed in all aspects considered, as well as a rise in the elevation of all glacier fronts. In addition to the area loss at lower altitudes, where the ablation rate is highest, shrinkage at some mountain tops was also present, evidenced by the occurrence of lower maximum elevations in the final years of the study compared with the initial years. Considering that some of the Colombian glaciers are located on active volcanoes, these areas are influenced by both external and internal factors, and a volcanic eruption and/or earthquake can cause sharp mass losses.
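A minimal sketch of the NDSI computation described above, with the band pairings stated in the abstract (inputs assumed to be pre-processed reflectance arrays):

```python
# NDSI = (green - SWIR) / (green + SWIR), computed per pixel.
# TM/ETM+: green = band 2, SWIR = band 5; OLI: green = band 3, SWIR = band 6.
import numpy as np

def ndsi(green, swir):
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / np.maximum(green + swir, 1e-9)

# Tiny synthetic example: snow/ice pixels give strongly positive NDSI.
green = np.array([[0.80, 0.30], [0.70, 0.20]])
swir  = np.array([[0.10, 0.25], [0.10, 0.30]])
print(ndsi(green, swir))
# Ice is then commonly selected by thresholding, e.g. NDSI > 0.4 (a common
# choice; the threshold actually used in the study is not stated here).
```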
|
26 |
The Box-Cox Transformation: A Review. 曾能芳 (Zeng, Neng-Fang). Unknown Date
The use of a transformation can often simplify the analysis of data, especially when the original observations deviate from the underlying assumptions of the linear model. The Box-Cox transformation has received much more attention than others. In this dissertation, we review the theory of estimation and hypothesis testing for the transformation parameter, and the sensitivity of the linear model parameters under the Box-Cox transformation. Monte Carlo simulation is used to study the performance of the transformations. We also examine whether the Box-Cox transformation actually makes the transformed observations satisfy the assumptions of the linear model.
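For reference, the Box-Cox family and a maximum likelihood estimate of its parameter can be sketched as follows (standard formulation, not specific to this dissertation):

```python
# Box-Cox: y^(lam) = (y**lam - 1)/lam for lam != 0, log(y) for lam = 0 (y > 0).
import numpy as np
from scipy import stats

def box_cox(y, lam):
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # skewed positive data
y_t, lam_hat = stats.boxcox(y)                     # MLE of the parameter
print(f"estimated lambda ~ {lam_hat:.3f}")         # near 0 for lognormal data
```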
|
27 |
Detection of interesting areas in images by using convexity and rotational symmetries. Karlsson, Linda. January 2002
<p>There are several methods avaliable to find areas of interest, but most fail at detecting such areas in cluttered scenes. In this paper two methods will be presented and tested in a qualitative perspective. The first is the darg operator, which is used to detect three dimensional convex or concave objects by calculating the derivative of the argument of the gradient in one direction of four rotated versions. The four versions are thereafter added together in their original orientation. A multi scale version is recommended to avoid the problem that the standard deviation of the Gaussians, combined with the derivatives, controls the scale of the object, which is detected. </p><p>Another feature detected in this paper is rotational symmetries with the help of approximative polynomial expansion. This approach is used in order to minimalize the number and sizes of the filters used for a correlation of a representation of the orientation and filters matching the rotational symmetries of order 0, 1 and 2. With this method a particular type of rotational symmetry can be extracted by using both the order and the orientation of the result. To improve the method’s selectivity a normalized inhibition is applied on the result, which causes a much weaker result in the two other resulting pixel values when one is high. </p><p>Both methods are not enough by themselves to give a definite answer to if the image consists of an area of interest or not, since several other things have these types of features. They can on the other hand give an indication where in the image the feature is found.</p>
|
28 |
Comparing Vegetation Cover in the Santee Experimental Forest, South Carolina (USA), Before and After Hurricane Hugo: 1989-2011. Cosentino, Giovanni R. 03 May 2013
Hurricane Hugo struck the coast of South Carolina on September 21, 1989 as a category 4 hurricane on the Saffir-Simpson scale. Landsat Thematic Mapper imagery was used to determine the extent of damage at the Santee Experimental Forest (SEF), part of the Francis Marion National Forest in South Carolina. The Normalized Difference Vegetation Index (NDVI) and change detection techniques were used to determine the initial forest damage and to monitor the recovery over a 22-year period following Hurricane Hugo (the differencing is sketched below). According to the results of the NDVI analysis, the SEF made a full recovery within ten years. The remote sensing techniques used were effective in identifying the damage as well as the recovery.
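A minimal sketch of NDVI differencing for this kind of damage and recovery analysis (Landsat TM band numbering: red = band 3, near-infrared = band 4; illustrative, not the author's actual processing chain):

```python
# NDVI = (NIR - red) / (NIR + red); before/after differencing flags canopy loss.
import numpy as np

def ndvi(red, nir):
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Tiny synthetic reflectance arrays (assumed pre-processed):
red_before = np.array([[0.05, 0.06], [0.05, 0.07]])
nir_before = np.array([[0.45, 0.40], [0.50, 0.42]])
red_after  = np.array([[0.12, 0.06], [0.15, 0.07]])
nir_after  = np.array([[0.20, 0.41], [0.22, 0.40]])

change = ndvi(red_after, nir_after) - ndvi(red_before, nir_before)
print(change)   # strongly negative entries indicate storm-damaged canopy
```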
|
29 |
A case study on the risk-adjusted financial performance of The Vice Fund: the risk-adjusted financial performance of this fund is evaluated through a comparison with another mutual fund having a different investment strategy and with two benchmarks. Bernardin, Arthur; Dumoussaud, Camille. January 2013
Nowadays, there is a debate about whether sin stocks bring investors higher returns than other stocks. This thesis is a case study of a mutual fund, The Vice Fund. This US fund has a specific investment strategy: it invests in sin stocks. We compared this mutual fund to The Timothy Plan Fund because they have similar characteristics, such as date of inception, total assets, home country and investment universe, except for the investment strategy: The Vice Fund invests in sin stocks and The Timothy Plan Fund does not. Two benchmarks are also used in the study: the S&P 500 Index as a domestic benchmark and the MSCI World Index as an international benchmark. This thesis is a case study using a deductive approach on quantitative grounds. The study covers ten years, from 2003 to 2012. We divided the entire period into three sub-periods depending on the S&P 500 Index trend: the first and last sub-periods are bullish and the second is bearish. To analyse both the financial performance and the risk of The Vice Fund, we use several tools. We calculated returns and risk-adjusted ratios: the Treynor ratio, the Sharpe ratio and the Jensen ratio (a sketch of the Sharpe computation follows this abstract). Because these ratios are less accurate in bearish markets, we calculated the normalized Sharpe ratio using linear regressions, and we also calculated the modified Sharpe ratio. To perform these calculations, we used DataStream as the database for the prices and dividends of the two mutual funds and the prices of the two benchmarks. We also used the one-month T-bill rate as the risk-free rate. We found that The Vice Fund had better average returns whatever the market conditions over the period studied. However, the differences between its weekly results and those of The Timothy Plan Fund and the benchmarks are not statistically significant. The risk-adjusted ratios confirmed the superiority of the risk-adjusted financial performance of the sin fund.
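For reference, a minimal sketch of an annualized Sharpe ratio computed from weekly data (synthetic numbers, not the thesis's DataStream series):

```python
# Sharpe ratio = mean excess return / standard deviation of excess returns,
# annualized here by sqrt(52) for weekly observations.
import numpy as np

def sharpe_ratio(returns, rf, periods_per_year=52):
    excess = np.asarray(returns) - np.asarray(rf)
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(1)
weekly_r = rng.normal(0.002, 0.02, size=520)   # ten years of weekly returns
weekly_rf = np.full(520, 0.0003)               # one-month T-bill proxy
print(f"Sharpe ~ {sharpe_ratio(weekly_r, weekly_rf):.2f}")
```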
|
30 |
Estimating nitrogen fertilizer requirements of canola (Brassica napus L.) using sensor-based estimates of yield potential and crop response to nitrogen. Holzapfel, Christopher Brian. 18 January 2008
The feasibility of using optical sensors and non-nitrogen-limiting reference crops to determine post-emergent nitrogen fertilizer requirements of canola was evaluated. The normalized difference vegetation index was well suited to estimating yield potential and nitrogen status. Although sensor-based nitrogen management was generally agronomically feasible for canola, the economic benefits remain uncertain because of the added cost of applying post-emergent nitrogen. / February 2008
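As one concrete formulation of the reference-crop idea, a sketch following the widely published response-index approach (an assumption for illustration; the thesis's exact algorithm and coefficients may differ):

```python
# Crude in-season N recommendation from sensor NDVI, kg N / ha.
# RI = NDVI(non-N-limiting reference) / NDVI(field) scales the potential
# yield response to added nitrogen.
def n_rate(ndvi_field, ndvi_reference, yp0, n_conc=0.033, nue=0.5):
    """yp0: predicted yield without added N (kg/ha), e.g. from an NDVI model.
    n_conc and nue (seed N concentration, N use efficiency) are assumed values."""
    ri = ndvi_reference / ndvi_field
    yp_n = min(yp0 * ri, 2.0 * yp0)          # cap the N-responsive yield
    return max((yp_n - yp0) * n_conc / nue, 0.0)

print(n_rate(ndvi_field=0.55, ndvi_reference=0.70, yp0=2500))  # ~45 kg N/ha
```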
|