661
Cross-view learning. Zhang, Li. January 2018.
Key to achieving more efficient machine intelligence is the capability to analyse and understand data across different views, which can be camera views or modality views (such as visual and textual). One generic learning paradigm for automatically understanding data from different views, called cross-view learning, includes cross-view matching, cross-view fusion and cross-view generation. This thesis investigates two of them, cross-view matching and cross-view generation, by developing new methods for the following specific computer vision problems. The first problem is cross-view matching for person re-identification, in which a person is captured by multiple non-overlapping camera views and the objective is to match him/her across views among a large number of imposters. Typically a person's appearance is represented using features of thousands of dimensions, whilst only hundreds of training samples are available owing to the difficulty of collecting matched training samples. With the number of training samples much smaller than the feature dimension, existing methods face the classic small sample size (SSS) problem and must resort to dimensionality reduction techniques and/or matrix regularisation, which cause a loss of discriminative power for cross-view matching. To that end, this thesis proposes to overcome the SSS problem in subspace learning by matching cross-view data in a discriminative null space of the training data. The second problem is cross-view matching for zero-shot learning, where data are drawn from different modalities, each forming a different view (e.g. visual or textual), in contrast to the single-modal data considered in the first problem. This is inherently more challenging because the gap between views is larger. Specifically, the zero-shot learning problem can be solved if the visual representation/view of the data (object) and its textual view are matched. Doing so requires learning a joint embedding space onto which data from different views can be projected for nearest-neighbour search. This thesis argues that the key to making zero-shot learning models succeed is choosing the right embedding space. Unlike most existing zero-shot learning models, which use a textual or an intermediate space as the embedding space for achieving cross-view matching, the proposed method uniquely explores the visual space as the embedding space. This thesis finds that in the visual space the subsequent nearest-neighbour search suffers much less from the hubness problem and thus becomes more effective. Moreover, a natural mechanism for optimising multiple textual modalities jointly in an end-to-end manner in this model demonstrates significant advantages over existing methods. The last problem is cross-view generation for image captioning, which aims to generate textual sentences automatically from visual images. Most existing image captioning studies are limited to investigating variants of deep learning-based image encoders, improving the inputs for the subsequent deep sentence decoders. Existing methods have two limitations: (i) they are trained to maximise the likelihood of each ground-truth word given the previous ground-truth words and the image, termed teacher forcing. This strategy may cause a mismatch between training and testing, since at test time the model uses the previously generated words from the model distribution to predict the next word. This exposure bias can result in error accumulation during sentence generation at test time, because the model has never been exposed to its own predictions. (ii) The training supervision metric, such as the widely used cross-entropy loss, differs from the evaluation metrics used at test time; in other words, the model is not directly optimised towards the task expectation, so the learned model is suboptimal. One main underlying reason is that the evaluation metrics are non-differentiable and therefore much harder to optimise against. This thesis overcomes these problems by exploiting the reinforcement learning idea. Specifically, a novel actor-critic based learning approach is formulated to directly maximise the reward, namely the actual natural language processing quality metrics of interest. Compared with existing reinforcement learning-based captioning models, the new method has the unique advantage that per-token advantage and value computation is enabled, leading to better model training.
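At test time, the cross-view matching step described above reduces to nearest-neighbour search in the chosen embedding space. Below is a minimal sketch of that search when the visual space is used as the embedding space; all data, names and dimensions are hypothetical, and this illustrates only the search step, not the thesis's full model:

```python
import numpy as np

def zero_shot_predict(image_features, class_prototypes, class_names):
    """Nearest-neighbour search in the visual space: each unseen class is
    represented by a prototype already projected into the visual feature
    space, and a test image is assigned to the class whose prototype is
    closest in cosine distance."""
    # Normalise rows so cosine similarity reduces to a dot product.
    f = image_features / np.linalg.norm(image_features, axis=1, keepdims=True)
    p = class_prototypes / np.linalg.norm(class_prototypes, axis=1, keepdims=True)
    sims = f @ p.T                        # (n_images, n_classes)
    return [class_names[i] for i in sims.argmax(axis=1)]

# Hypothetical toy data: 512-D visual features, 3 unseen classes.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 512))
images = prototypes[[2, 0]] + 0.1 * rng.normal(size=(2, 512))
print(zero_shot_predict(images, prototypes, ["zebra", "okapi", "tapir"]))
```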
662
Evaluating the impact of a training programme using propensity score matching: a nonparametric and a semiparametric approach. Silveira, Luiz Felipe de Vasconcellos. January 2015.
The goal of this dissertation is to evaluate the impact of a job training programme using propensity score matching, under two approaches: one nonparametric and one semiparametric. For the nonparametric estimation, the method proposed by Li, Racine and Wooldridge (2009) was used; for the semiparametric estimation, the Generalized Additive Model proposed by Hastie and Tibshirani (1990). The results indicate that both methods provide estimates as good as or better than those obtained parametrically.
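Propensity score matching pairs each treated unit with control units of similar estimated treatment probability before comparing outcomes. A minimal sketch of the core mechanic with a parametric (logistic) score and one-nearest-neighbour matching on synthetic data; the dissertation's nonparametric and GAM variants replace the score model, and everything here is illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 2))                     # observed covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0]))).astype(bool)
y = 2.0 * treated + x @ np.array([1.0, 0.5]) + rng.normal(size=n)

# Step 1: estimate propensity scores e(x) = P(treated | x).
scores = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: match each treated unit to the nearest control on the score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(scores[c_idx][None, :] - scores[t_idx][:, None]).argmin(axis=1)]

# Step 3: average treatment effect on the treated (true effect is 2.0).
att = (y[t_idx] - y[matches]).mean()
print(f"ATT estimate: {att:.2f}")
```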
663
The Stable Marriage Problem: Optimizing Different Criteria Using Genetic Algorithms. Damianidis, Ioannis. January 2011.
“The Stable Marriage Problem (SMP) is basically the problem of finding a stable matching between two sets of persons, the men and the women, where each person in every group has a list containing every person that belongs to the other group, ordered by preference. The first ones to discover a stable solution for the problem were D. Gale and G.S. Shapley. Today the problem and most of its variations have been studied by many researchers, and for most of them polynomial-time algorithms do not exist. Lately, genetic algorithms have been used to solve such problems and have often produced better solutions than specialized polynomial algorithms. In this thesis we study and show that the Stable Marriage Problem has a number of important real-world applications. In the experimentation, we model the original problem and one of its variations and show the benefits of using genetic algorithms for solving the SMP.” / Program: Magisterutbildning i informatik
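For reference, the Gale-Shapley deferred-acceptance algorithm mentioned above finds a stable matching in O(n²) time. A compact sketch with toy preference lists (illustrative only; the thesis itself studies genetic algorithms on top of this baseline):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: men propose in preference order; each woman
    tentatively holds her best proposer so far. Returns a stable matching."""
    rank = {w: {m: r for r, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)                 # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}
    engaged_to = {}                        # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]   # m's best woman not yet proposed to
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:  # w prefers m: trade up
            free.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free.append(m)                 # w rejects m; he stays free
    return {m: w for w, m in engaged_to.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))            # {'a': 'x', 'b': 'y'} is stable here
```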
664
Correlation of resin cement shades to their corresponding try-in pastes. Jastaneiah, Wid. 24 October 2018.
OBJECTIVES: The aims of this study were to determine the shade correlation between try-in pastes and their corresponding resin cements, and to investigate the effect of resin cement shade and of ceramic thickness, shade, and translucency on the final color outcome over tooth-shaded backgrounds.
MATERIALS AND METHODS: Lithium disilicate CAD/CAM blocks (IPS e.max CAD) were prepared in high and low translucency, in two shades (A1 and A3), and in two thicknesses (0.53 ± 0.02 mm and 0.83 ± 0.02 mm). Four tooth-shaded backgrounds (ND2, ND5, ND8, and ND9) were prepared from acrylic resin in a standard thickness of 6.610 mm to achieve complete opacity. RelyX veneer cement and its corresponding try-in paste were used in three shades, Transparent (TR), White Opaque (WO), and Bleached Opaque (BO), at a thickness of 80 ± 5 μm. For each combination, the color was measured with a spectrophotometer to calculate the color difference (ΔΕ value) with reference to the ceramic veneer, and the differences in ΔΕ among the specimens were compared statistically using JMP Pro 13. The analysis addressed three aims: (1) to compare the ability of the ceramic to mask the abutment in relation to its thickness (0.5 and 0.8 mm), translucency (HT and LT) and shade (A1 and A3); (2) to assess the effect of a change in cement color (TR, WO, and BO) on the final color of the ceramic; and (3) to determine the correlation between try-in pastes and their corresponding resin cements.
RESULTS: A significant difference (p < .0001) was found for the following factors: stump shade, ceramic thickness, cured cement, ceramic shade, cement type, and the interactions of cured cement with cement type and of stump shade with ceramic translucency. A significant difference was also found for ceramic translucency (p = 0.0476). While cured cement and its corresponding try-in paste showed a significant difference in color masking (p < .0001), the White Opaque cement and White Opaque try-in paste exhibited an insignificant color change outcome (p = 0.8051).
CONCLUSION: The RelyX veneer cement shades Translucent and Bleached Opaque have lower masking ability than the White Opaque cement. RelyX veneer try-in paste is much less effective in masking than its corresponding resin cement. Among the shades tested (White Opaque, Translucent and Bleached Opaque), the only correlation between RelyX veneer cements and their corresponding try-in pastes is with shade White Opaque. This study demonstrated that underlying tooth abutment color, cement color, and ceramic thickness, shade and translucency all influence the resulting optical color of CAD/CAM lithium disilicate-reinforced glass-ceramic restorations.
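The ΔΕ values reported above come from CIELAB coordinates measured by the spectrophotometer. The abstract does not state which ΔE variant was used, so the classic CIE76 formula is assumed here for illustration, with hypothetical readings:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*):
    sqrt(dL^2 + da^2 + db^2). A common rule of thumb in the dental
    literature treats dE around 1 as just noticeable and dE > 3.3 as
    clinically perceptible."""
    return math.dist(lab1, lab2)

# Hypothetical readings: ceramic over cured cement vs. over try-in paste.
print(round(delta_e_cie76((71.2, 1.5, 14.8), (69.8, 1.1, 16.0)), 2))
```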
665
Impact of the conditional cash transfer Bolsa Família on children's labor decisions: an analysis using microdata from PNAD. Nascimento, Adriana Rosa do. 16 January 2014.
Child labor has been widely studied under various approaches: its effects on education, its determinants, child labor and health, the persistence of child labor across generations, and impact evaluations of social programs on child labor. Nevertheless, child labor is still observed in Brazil. In 2011, 4.33% of children between 5 and 15 years old were working or engaged in activity for their own consumption or in building for their own use, with agriculture, fishing and forestry being the sector employing the most child labor. Most working children in 2011 were boys aged 11-15; more than 10% of boys in this age group worked. The average working week exceeded 20 hours, and 4.34% of Brazilian working children also studied. Besides presenting an up-to-date overview of the situation of child labor in Brazil, the main goal of this paper is to examine the impact of the conditional cash transfer program Bolsa Família on child labor. The impact was estimated by the method of propensity score matching. Regressions were also estimated using, alternatively, income from conditional cash transfers as an explanatory variable. The data used are from the Pesquisa Nacional por Amostra de Domicílios (PNAD), the Brazilian annual household survey, for 2011 and 2009. The results show that the program has no significant impact on the probability that a child works or on hours worked. Additionally, income from social cash transfers has a negative and significant impact on the probability that a child works and on the number of hours worked. Thus, although participation in the program has no impact on child labor, the income from the program acts to reduce the probability of child labor and the number of hours worked.
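The paper's alternative specification enters transfer income directly as a regressor for the binary work decision. A hedged sketch of that kind of model, a logit on synthetic data (variable names, coefficients and data are hypothetical; the actual study uses PNAD microdata with additional controls):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
transfer = rng.exponential(70, n)               # monthly transfer income (R$)
hh_income = rng.lognormal(6, 0.5, n)            # other household income
age = rng.integers(5, 16, n)
latent = -2 + 0.15 * age - 0.004 * transfer - 0.0005 * hh_income
works = (latent + rng.logistic(size=n) > 0).astype(int)

X = sm.add_constant(np.column_stack([transfer, hh_income, age]))
fit = sm.Logit(works, X).fit(disp=0)
# A negative coefficient on `transfer` mirrors the paper's finding that
# transfer income reduces the probability that a child works.
print(fit.params)
```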
666
Mechanism design for complex systems: bipartite matching of designers and manufacturers, and evolution of air transportation networks. Thekinen, Joseph D. 20 December 2018.
A central issue in systems engineering is designing systems in which the stakeholders do not behave as the systems designer expects. Usually these stakeholders have different, and often conflicting, objectives. Each stakeholder tries to maximize its individual objective, and the overall system does not function as the systems designers expect.

We specifically study two such systems: a) the cloud-based design and manufacturing system (CBDM) and b) the Air Transportation System (ATS). In CBDM, the two stakeholders with conflicting objectives are designers trying to get their parts printed at the lowest possible price and manufacturers trying to sell their excess resource capacity at maximum profit. In the ATS, on one hand, airlines make route selection decisions with the goal of maximizing their market share and profits; on the other hand, regulatory bodies such as the Federal Aviation Administration try to form policies that increase the overall welfare of the people.

The objective of this dissertation is to establish a mechanism design-based framework: a) for resource allocation in CBDM, and b) to guide policymakers in channeling the evolution of the network topology of the ATS.

This is the first attempt in the literature to formulate resource allocation in CBDM as a bipartite matching problem, with designers and manufacturers forming two distinct sets of agents. We recommend the best mechanisms for different CBDM scenarios, such as the totally decentralized scenario and the organizational scenario, based on how well the properties of each mechanism meet the requirements of the scenario. In addition to analyzing existing mechanisms, CBDM poses challenges not addressed in the literature. One such challenge is how often the matching mechanism should be implemented when agents interact over a long period of time. We answer this question through theoretical propositions backed up by simulation studies. We conclude that a matching period equal to the ratio of the number of service providers to the arrival rate of designers is optimal when the service rate is high, and a matching period equal to the ratio of mean printing time to mean service rate is optimal when the service rate is low.

In the ATS, we model the evolution of the network topology as the result of route selection decisions made by airlines under competition. Using data on historic decisions, we use discrete games to model the preference parameters of airlines towards explanatory variables such as market demand and operating cost. Differently from the existing literature, we use an airport presence-based technique to estimate these parameters. This reduces the risk of over-fitting and improves prediction accuracy. We conduct a forward simulation to study the effect of altering the explanatory variables on the Nash equilibrium strategies. Regulatory bodies could use these insights when forming policies.

The overall contribution of this research is a mechanism design framework for designing complex engineered systems such as CBDM and the ATS. Specifically, for CBDM a matching mechanism-based resource allocation framework is established, and matching mechanisms are recommended for various CBDM scenarios. Through theoretical and simulation studies we propose the frequency at which matching mechanisms should be implemented in CBDM. Though these results are established for CBDM, they are general enough to be applied anywhere matching mechanisms are implemented multiple times. In the ATS, we propose an airport presence-based approach to estimate the parameters that quantify the preference of airlines towards explanatory variables.
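The matching-period rule stated in the conclusion above can be written as a small decision function. A hedged sketch: the threshold separating the "high" and "low" service-rate regimes is not specified in the abstract, so it is left as an illustrative parameter:

```python
def matching_period(n_providers, designer_arrival_rate, mean_print_time,
                    mean_service_rate, high_service_threshold=1.0):
    """Matching-period heuristic from the dissertation's conclusion:
    providers / arrival rate when the service rate is high, otherwise
    mean printing time / mean service rate. The regime threshold is an
    assumption made here for illustration."""
    if mean_service_rate >= high_service_threshold:
        return n_providers / designer_arrival_rate
    return mean_print_time / mean_service_rate

# Hypothetical numbers: 20 printers, 5 design jobs arriving per hour.
print(matching_period(20, 5.0, mean_print_time=2.0, mean_service_rate=1.5))
```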
667
Color Lines and Regions and Their Stereo Matching. Lertchuwongsa, Noppon. 13 December 2011.
In computer vision, salient points are essential features for algorithms. Performance depends on external parameters (e.g. the illuminant), and similarity measures are central to recognition. To secure processing efficiency, the extracted features have to be stable enough, and the similarity measure needs to distinguish perfectly between them. In this thesis, joint geometric and color features are studied: color lines and regions. They found the detection of a third feature, range, which in turn helps to assess their quality. Color lines are extensions of classical level lines: the 3-D color space is mapped onto a 1-D scale especially designed to retain the chromatic information where it is suitable. Regions require the usual image connectivity, but in association with compactness in the two-dimensional histogram stemming from the dichromatic model. The homogeneity so designed grants a priori robustness against illumination variations by separating the body colors and splitting color from intensity. This homogeneity gives rise to two methods for extracting compact sets around histogram modes: a color-first analysis (an analytic extraction of color local extrema), and a joint color/space analysis (the same, but controlled by region growing). As for depth, three methods to compute the stereo disparity are proposed, and their results are confronted with the ground truth: 1. color line matching based on a modified Hausdorff distance; 2. studying the shape of the disparity histogram between regions; 3. cooperation between pixel correlation and region matching. The robustness of the designed features is proved on several stereo pairs. Future work deals with improving efficacy and accuracy.
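The first disparity method matches color lines with a modified Hausdorff distance. Below is a minimal sketch of one common "modified" variant, the Dubuisson-Jain distance, which averages point-to-set distances instead of taking the maximum; the thesis's exact modification may differ, and the point sets here are toys:

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff distance (Dubuisson & Jain, 1994) between two
    point sets a and b, given as (n, d) arrays: for each set, average the
    distance from its points to the other set, then take the larger value.
    Less sensitive to outliers than the classic max-based Hausdorff."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Toy 2-D "lines" as sampled point sets.
line1 = np.column_stack([np.arange(10.0), np.zeros(10)])
line2 = np.column_stack([np.arange(10.0), np.full(10, 0.5)])
print(modified_hausdorff(line1, line2))  # 0.5
```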
668
Graph Partitioning for the Finite Element Method: Reducing Communication Volume with the Directed Sorted Heavy Edge Matching. González García, José Luis. 02 May 2019.
No description available.
669
Structural and shape reconstruction using inverse problems and machine learning techniques with application to hydrocarbon reservoirs. Etienam, Clement. January 2019.
This thesis introduces novel ideas in subsurface reservoir model calibration, known as history matching in the reservoir engineering community. The target of history matching is to mimic historical pressure and production data from the producing wells with the output of the reservoir simulator, for the sole purpose of reducing uncertainty in such models and improving confidence in production forecasts. Ensemble-based methods such as the Ensemble Kalman Filter (EnKF) and the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) have been proposed for history matching in the literature. EnKF/ES-MDA is a Monte Carlo ensemble-based filter in which the covariance is represented at the mean of the ensemble of the distribution instead of the uncertain true model. In EnKF/ES-MDA no calculation of gradients is required, and the mean of the ensemble of realisations provides the best estimate, with the ensemble on its own estimating the probability density. However, because of the inherent assumptions of linearity and Gaussianity of the petrophysical property distribution, EnKF/ES-MDA does not provide an acceptable history match and characterisation of uncertainty when tasked with calibrating reservoir models with channel-like structures. One of the novel methods introduced in this thesis combines successive parameter and shape reconstruction using level set functions (EnKF/ES-MDA-level set), where the indicator functions of the spatial permeability fields are transformed into signed distances. These signed distance functions (better suited to the Gaussian requirement of EnKF/ES-MDA) are then updated during the EnKF/ES-MDA inversion. The method outperforms standard EnKF/ES-MDA in retaining the geological realism of channels during and after history matching and also yields a lower root-mean-square (RMS) error than standard EnKF/ES-MDA. To improve on the petrophysical reconstruction attained with the EnKF/ES-MDA-level set technique, a novel parametrisation incorporating an unsupervised machine learning method for the recovery of the permeability and porosity fields is developed. The permeability and porosity fields are posed as a sparse field recovery problem, and a novel SELE (Sparsity-Ensemble optimisation-Level-set Ensemble optimisation) approach is proposed for the history matching. In SELE, some realisations are learned using the K-means clustering Singular Value Decomposition (K-SVD) to generate an overcomplete codebook, or dictionary. This dictionary is combined with Orthogonal Matching Pursuit (OMP) to ease the ill-posed nature of the production data inversion, converting the permeability/porosity field into a sparse domain. SELE enforces prior structural information on the model during history matching and reduces the computational complexity of the Kalman gain matrix, leading to faster attainment of the minimum of the cost function. From the results shown in the thesis, SELE outperforms conventional EnKF/ES-MDA in matching the historical production data, evident in the lower RMS value and a high geological realism/similarity to the true reservoir model.
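For readers unfamiliar with ES-MDA, each of its assimilation steps applies a Kalman-like ensemble update with inflated measurement noise. A minimal single-step sketch follows, in which toy dimensions and a linear forward model stand in for the reservoir simulator; this is the generic ES-MDA update (Emerick-Reynolds form), not the thesis's SELE variant:

```python
import numpy as np

def esmda_step(M, d_obs, forward, C_e, alpha, rng):
    """One ES-MDA assimilation step. M: (n_params, n_ens) parameter
    ensemble; C_e: (n_obs, n_obs) measurement-error covariance; alpha:
    inflation factor (the sum of 1/alpha over all steps must equal 1)."""
    D = forward(M)                                    # predicted data
    n_obs, n_ens = D.shape
    # Perturb observations with inflated noise, one draw per member.
    d_pert = d_obs[:, None] + np.sqrt(alpha) * rng.multivariate_normal(
        np.zeros(n_obs), C_e, size=n_ens).T
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)
    C_dd = dD @ dD.T / (n_ens - 1)
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)      # Kalman-like gain
    return M + K @ (d_pert - D)

rng = np.random.default_rng(0)
G = rng.normal(size=(5, 20))                          # toy linear "simulator"
m_true = rng.normal(size=20)
C_e = 0.01 * np.eye(5)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(5), C_e)
M = rng.normal(size=(20, 100))                        # prior ensemble
for alpha in [4, 4, 4, 4]:                            # 4 steps, sum(1/a) = 1
    M = esmda_step(M, d_obs, lambda X: G @ X, C_e, alpha, rng)
print(np.abs(G @ M.mean(axis=1) - d_obs).mean())      # small data mismatch
```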
670
Investigation of a complex conjugate matching circuit for a piezoelectric energy harvester. Ku Ahamad, Ku Nurul Edhura. January 2018.
The work described in this thesis is aimed at developing a novel piezoelectric cantilever energy harvesting circuit, so that more energy can be obtained from a particular piezoelectric harvester than is possible using conventional circuits. The main focus of the work was to design, build and test a proof-of-principle system rather than a commercial version, so as to determine any limitations of the circuit. The circuit functions by cancelling the capacitive output reactance of the piezoelectric harvester with a simulated inductance, and is based on an idea proposed by Qi in 2011. Although Qi's approach demonstrated that the circuit could function, the system proved too lossy, and so a less lossy version is attempted here. Experimental and software simulations are provided to verify the theoretical predictions. A prototype amplified inductor circuit was simulated and tested. The simulation results showed that, although harmonic current losses were present in the circuit, it should produce an amplified effective inductance and a maximum output power of 165mW. The effective inductance is derived from the voltage across the 2H inductor; this voltage is amplified and applied to the circuit via an inverter to provide an extra simulated inductance, so that the overall inductance can be resonated with the output capacitance of the piezoelectric harvester. Hence the capacitive impedance of the harvester is nearly cancelled. The study and analysis of the amplified inductor circuit was carried out for a single cantilever harvester. Both open-loop and closed-loop testing of the system were carried out. The open-loop test showed that the concept functions as predicted. The purpose of the closed-loop test was to make the system adjust automatically for different resonance frequencies. The circuit was tested at a 52Vpp inverter output voltage and demonstrated a harvested power of 145.5mW. The experimental results show that the harvester output power is boosted from 8.8mW, as per the manufacturer's data sheet, to 145.5mW (16.5 times). This is approximately double the power available using circuits described in the literature.
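The core of the approach is reactance cancellation: at angular frequency ω, the harvester's output capacitance C presents a reactance 1/(ωC), which an inductance L = 1/(ω²C) cancels at resonance. A small illustrative calculation (the component values here are hypothetical, not the thesis's measured ones) also shows why the inductance must be simulated electronically rather than built as a real coil:

```python
import math

def inductance_to_cancel(c_farads, f_hz):
    """Inductance that resonates with (and so cancels) a capacitive
    reactance at frequency f: L = 1 / ((2*pi*f)**2 * C)."""
    omega = 2 * math.pi * f_hz
    return 1 / (omega**2 * c_farads)

# Hypothetical harvester: 100 nF output capacitance, 50 Hz vibration.
L = inductance_to_cancel(100e-9, 50.0)
print(f"Required inductance: {L:.1f} H")  # ~101.3 H -- far too large for a
# practical coil, hence the amplified (simulated) inductor circuit.
```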