211 |
Effect of teaching monosyllabic words via arbitrary conditional relations on the minimal control units in recombinative reading. Ariene Coelho Souza, 25 June 2009
The matching-to-sample conditional discrimination procedure is used in studies that experimentally investigate the relations involved in reading and, more specifically, examine the possibility of new relations emerging from those that were directly trained. Proficient reading, however, requires that the learner come under the control of units smaller than the whole word, so that reading controlled by minimal units can emerge. Most research in this area has been carried out with training and recombinative testing of whole words, and in these studies reading under the control of minimal units generally appears only after the training of at least three sets of words. The objective of the present study was to investigate whether the direct training of a repertoire of four monosyllabic words could increase the speed of acquisition of control by smaller units and reduce variability in recombinative-reading performance. Two experiments were conducted. In the first, the participants were four children aged between 3 years 10 months and 5 years. The original experimental stimuli (set ABC) were NO, PE, PA and LU, and the derived stimuli (set A'B'C') were LUPA, PANO, PAPA and LULU. All four participants showed variability in performance, and recombinative reading did not emerge for any of them. A second experiment was then carried out, manipulating variables that possibly interfered with the participants' performance. Three of the four children exposed to the first experiment took part in this second experiment. The original experimental stimuli (set ABC) were BO, BA, LO and LA, and the derived stimuli (set A'B'C') were BOBA, BABO, LOLA and LALO. Two of the three participants showed recombinative reading, and their variability in performance was smaller than in previous studies. It was concluded that stimulus partitioning is an important variable for the functional independence of the syllables and the subsequent emergence of recombinative reading. Thus, for the participants of this study, monosyllabic training was effective in increasing the speed of acquisition and reducing variability in recombinative-reading performance.
|
212 |
Relevant variables for the emergence of symmetry in pigeons in a successive matching-to-sample procedure. Viviane Verdu Rico, 07 May 2010
Specific discriminative training can set the conditions for an organism to respond consistently under the control of stimulus relations that were not directly trained; such relations are said to be emergent. If these relations satisfy the properties of reflexivity, symmetry and transitivity, the formation of classes of equivalent stimuli is demonstrated. In recent decades, several studies have tried to demonstrate the formation of equivalence classes in non-human animals, but few have succeeded. Among the defining properties of equivalence, symmetry has been the most difficult to observe. Having identified variables related to the inconsistent results of studies involving symmetry tests, Frank and Wasserman (2005) designed an experiment with pigeons in which identity and arbitrary trials were presented within the same session, and this procedure produced positive outcomes in the symmetry tests. The purpose of the present study was to identify some of the variables that possibly contributed to Frank and Wasserman's (2005) results. Two experiments, with three pigeons each, were conducted in order to verify (1) the replicability of the data obtained by Frank and Wasserman (2005) and (2) whether training the reversals of the negative combinations is an important factor in obtaining emergent symmetry with this procedure. The subjects were trained to peck stimuli presented one at a time on a computer screen in a successive matching task. Experiment I consisted of mixed training of the AA, BB and AB relations followed by a symmetry test of the BA relation. Experiment II was similar, except that CC relations were added to the training so that the reversals of the negative combinations would not be trained. Only two pigeons, one from each experiment, showed discriminated responding in training. Both performed similarly to the pigeons in Frank and Wasserman (2005), which would indicate the emergence of symmetry. However, a more detailed analysis of the performance of these two pigeons revealed unstable responding across test sessions. The other four subjects did not show discriminated responding despite a large number of training sessions (between 65 and 220). An analysis of the distribution of responses over the stimulus-presentation interval indicated differences between the pigeons that completed the training and those that did not: the latter showed responding marked by long pauses between responses and a lower response rate to the sample stimulus, whereas the pigeons that completed the training phase responded steadily, with few and short pauses, and at a higher rate to the sample than to the comparison stimuli. The results of the symmetry tests indicate that training the reversals of the negative combinations was not a factor related to the emergence of symmetry with this procedure. The fact that most subjects did not learn the trained relations, together with the different response patterns shown during training and the unstable performance in the test sessions, indicates that the procedure needs to be refined in order to favor learning and produce stability in the tests.
|
213 |
Factors influencing the two-way matching between financial leasing companies and lessee small and medium-sized enterprises. January 2019
abstract: This study examines the factors that shape the matching between financial leasing companies and the small and medium-sized enterprises (SMEs) that lease from them. Starting from the actual business process of financial leasing, Study 1 presents a case analysis of Company H and yields an initial set of influencing factors; Study 2 and Study 3 then analyze data from SME clients and from leasing companies, respectively, and compare the strength of these factors. The results identify four dimensions that affect the two-way matching, as well as differences in their respective influence. The study concludes with recommendations for leasing companies and for lessee SMEs, aimed at increasing the probability that the two parties match and close a deal. / Dissertation/Thesis / Doctoral Dissertation, Business Administration, 2019
|
214 |
Anti-matching constraints and rewrite-based programming. Kopetz, Radu, 15 October 2008
The main objective of this thesis is the study and formalization of new constructs that increase the expressiveness of pattern matching and of rule-based languages in general. This work is motivated by the development of Tom, a system that enriches imperative languages such as Java and C with high-level constructs like matching and strategies. A first extension we propose is the notion of anti-patterns, i.e. patterns that may contain complement symbols. Negation is intrinsic to everyday reasoning: most of the time, when we search for something, we base our patterns on positive and negative conditions. This should naturally carry over to software that performs pattern-based search. For example, anti-patterns make it possible to specify that we are looking for white cars that are not minivans, or for a list of objects that does not contain two identical elements. We formally define the semantics of anti-patterns in the syntactic case, i.e. when symbols have no associated theory, and also modulo an arbitrary equational theory. We then extend the classical notion of matching between patterns and ground terms to matching between anti-patterns and ground terms (anti-matching). Inspired by the expressiveness of production rules, we propose several extensions to the matching constructs provided by Tom. Consequently, the condition for applying a rule is no longer a single matching constraint but a combination (conjunction or disjunction) of matching and anti-matching constraints, together with other types of conditions. Classical matching-compilation techniques are not well suited to these complex conditions, which motivated the study of a new compilation method based on rewrite systems. The application of these systems is controlled by strategies, so that future extensions (such as support for new matching theories) can be added in a simple and natural way, without interfering with existing code. We have completely rewritten the Tom compiler using this technique. Putting all these elements together yields an environment for describing and implementing transformations in an elegant and concise way. To promote its use in complex industrial projects, we developed a technique for automatically extracting structural information from an arbitrary hierarchy of Java classes, which allows the pattern matching offered by Tom to be integrated into any Java application, new or existing.
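To make the idea of anti-patterns concrete, the following is a minimal sketch in Python rather than in Tom syntax: a toy term matcher in which a pattern may contain a complement construct (here called Not) that succeeds exactly when its sub-pattern fails to match. The names Var, Not and match are invented for the illustration and are not part of Tom.

    # Illustrative sketch only: a toy first-order matcher with a complement
    # construct, to convey the idea of anti-patterns. This is not Tom syntax;
    # Var, Not and match are names invented for this example.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:            # pattern variable
        name: str

    @dataclass(frozen=True)
    class Not:            # complement symbol: matches when the sub-pattern does not
        pattern: object

    def match(pattern, term, subst=None):
        """Return a substitution if `pattern` matches the ground `term`, else None."""
        subst = dict(subst or {})
        if isinstance(pattern, Var):
            if pattern.name in subst and subst[pattern.name] != term:
                return None
            subst[pattern.name] = term
            return subst
        if isinstance(pattern, Not):
            # anti-pattern: succeed (without new bindings) iff the complemented pattern fails
            return subst if match(pattern.pattern, term, subst) is None else None
        if isinstance(pattern, tuple) and isinstance(term, tuple):
            if len(pattern) != len(term) or pattern[0] != term[0]:
                return None
            for p, t in zip(pattern[1:], term[1:]):
                subst = match(p, t, subst)
                if subst is None:
                    return None
            return subst
        return subst if pattern == term else None

    # "a white car that is not a minivan"
    car = ("car", "white", "sedan")
    print(match(("car", "white", Not("minivan")), car))         # {} : matches, no variables bound
    print(match(("car", Var("colour"), Not("minivan")), car))   # {'colour': 'white'}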
|
215 |
Multiple graph matching and applications. Solé Ribalta, Albert, 11 July 2012
In pattern recognition, the use of graphs is, to a great extent, appropriate and advantageous. Usually, the vertices of a graph represent local parts of an object, while the edges represent relations between these local parts. These advantages, however, come with a severe drawback: the distance between two graphs cannot be optimally computed in polynomial time. Given this special characteristic, the use of graph prototypes becomes ubiquitous. Graph prototypes are widely applicable, the most common applications being clustering, classification, object characterization and graph databases, to name a few. The objective of a graph prototype is, however, the same in all applications: the representation of a set of graphs. To synthesize a prototype, all elements of the set must be mutually labeled. This mutual labeling consists in identifying which nodes of which graphs represent the same information in the training set. Once this mutual labeling is done, the set can be characterized and combined to create a graph prototype. We call this initial labeling a common labeling. Up to now, state-of-the-art algorithms for computing a common labeling have lacked either performance or a theoretical basis. In this thesis, we formally describe the common labeling problem and give a clear taxonomy of the types of algorithms. Six new algorithms, relying on different techniques, are described to compute a suboptimal solution to the common labeling problem. The performance of the proposed algorithms is evaluated using an artificial dataset and several real datasets. In addition, the algorithms have been evaluated on several real applications, including graph databases and group-wise image registration. In most of the tests and applications evaluated, the presented algorithms show a clear improvement over the state of the art.
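As an illustration of what a common labeling is, and not as one of the six algorithms proposed in the thesis, the sketch below aligns every graph of a small set to the first graph using the Hungarian algorithm on a crude node-degree dissimilarity, so that nodes mapped to the same reference node share a label. The graphs are toy examples.

    # Illustrative sketch only: a naive common labeling that aligns every graph
    # in a set to the first graph, using the Hungarian algorithm on a crude
    # node-degree dissimilarity. Not one of the algorithms of the thesis.
    import networkx as nx
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def common_labeling(graphs):
        """Map each node of each graph to a common label (a node of the reference graph)."""
        ref = graphs[0]
        ref_nodes = list(ref.nodes)
        labelings = []
        for g in graphs:
            g_nodes = list(g.nodes)
            # cost = absolute degree difference (a very crude node similarity)
            cost = np.array([[abs(ref.degree(r) - g.degree(n)) for n in g_nodes]
                             for r in ref_nodes])
            rows, cols = linear_sum_assignment(cost)
            labelings.append({g_nodes[c]: ref_nodes[r] for r, c in zip(rows, cols)})
        return labelings

    graphs = [nx.path_graph(4), nx.cycle_graph(4), nx.star_graph(3)]
    for labeling in common_labeling(graphs):
        print(labeling)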
|
216 |
Matching structure and Pfaffian orientations of graphs. Norine, Serguei, 20 July 2005
The first result of this thesis is a generation theorem for bricks. A brick is a 3-connected graph such that the graph obtained from it by deleting any two distinct vertices has a perfect matching. The importance of bricks stems from the fact that they are the building blocks of a decomposition procedure of Kotzig, and of Lovasz and Plummer. We prove that every brick except the Petersen graph can be generated from K_4 or the prism by repeatedly applying certain operations in such a way that all the intermediate graphs are bricks. We use this theorem to prove an exact upper bound on the number of edges in a minimal brick with a given number of vertices and to prove that every minimal brick has at least three vertices of degree three.
The second half of the thesis is devoted to an investigation of graphs that admit Pfaffian orientations. We prove that a graph admits a Pfaffian orientation if and only if it can be drawn in the plane in such a way that every perfect matching crosses itself an even number of times. Using similar techniques, we give a new proof of a theorem of Kleitman on the parity of crossings and develop a new approach to Turan's problem of estimating the crossing number of complete bipartite graphs. We further extend our methods to study k-Pfaffian graphs and generalize a theorem by Gallucio, Loebl and Tessler. Finally, we relate Pfaffian orientations and signs of edge-colorings and prove a conjecture of Goddyn that every k-edge-colorable k-regular Pfaffian graph is k-list-edge-colorable. This generalizes a theorem of Ellingham and Goddyn for planar graphs.
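The definition of a brick can be checked directly for small graphs. The sketch below, which is only an illustration and not taken from the thesis, tests 3-connectivity and then verifies that deleting every pair of distinct vertices leaves a graph with a perfect matching, using networkx.

    # Illustrative sketch (not from the thesis): checking the definition of a
    # brick -- a 3-connected graph in which deleting any two distinct vertices
    # leaves a graph with a perfect matching. Intended for small graphs only.
    import itertools
    import networkx as nx

    def has_perfect_matching(g):
        m = nx.max_weight_matching(g, maxcardinality=True)
        return 2 * len(m) == g.number_of_nodes()

    def is_brick(g):
        if nx.node_connectivity(g) < 3:
            return False
        return all(has_perfect_matching(g.subgraph(set(g) - {u, v}))
                   for u, v in itertools.combinations(g.nodes, 2))

    print(is_brick(nx.complete_graph(4)))   # True: K_4 is a brick
    print(is_brick(nx.petersen_graph()))    # True: the Petersen graph is a brick
    print(is_brick(nx.cycle_graph(6)))      # False: C_6 is not 3-connected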
|
217 |
Algorithms for Sequence Similarity Measures. MOHAMAD, Mustafa Amid, 17 November 2010
Given two sets of points $A$ and $B$ ($|A| = m$, $|B| = n$), we seek a minimum-weight many-to-many matching, which matches each point in $A$ to at least one point in $B$ and vice versa. Each matched pair (an edge) has a weight, and the goal is to find the matching that minimizes the total weight. We study two kinds of problems depending on the edge weight used: the first edge weight is the Euclidean distance, $d_1$; the second is the square of the Euclidean distance, $d_2$. An $O(k\log k)$ algorithm already exists for $d_1$, where $k=m+n$; we provide an $O(mn)$ algorithm for the $d_2$ problem. We also solve the problem of finding the minimum-weight matching when the sets $A$ and $B$ are allowed to be translated on the real line, presenting an $O(mnk \log k)$ algorithm for the $d_1$ problem and an $O(3^{mn})$ algorithm for $d_2$. Furthermore, we deal with the special case where $A$ and $B$ lie on a circle of a given circumference, presenting an $O(k^2 \log k)$ algorithm and an $O(kmn)$ algorithm for the minimum-weight matching under the $d_1$ and $d_2$ weights, respectively. Much like the problem on the real line, we extend this case to allow the sets $A$ and $B$ to be rotated on the circle and seek the minimum-weight many-to-many matching when rotations are allowed: for $d_1$ we present an $O(k^2mn \log k)$ algorithm, and for $d_2$ an $O(3^{mn})$ algorithm. / Thesis (Master, Computing), Queen's University, 2010-11-08
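To make the objective concrete, the following sketch solves the minimum-weight many-to-many matching by brute force over edge subsets for tiny point sets on the real line. It is exponential in $mn$ and is meant only to illustrate the problem definition, not the $O(k\log k)$ or $O(mn)$ algorithms discussed above; the sample coordinates are invented.

    # Illustrative sketch only: exponential brute force for the minimum-weight
    # many-to-many matching between two small point sets on the real line.
    # Assumes pairwise distinct coordinates; not the algorithms of the thesis.
    import itertools

    def min_many_to_many(A, B, weight=lambda a, b: abs(a - b)):
        edges = [(a, b) for a in A for b in B]
        best = None
        for r in range(1, len(edges) + 1):
            for subset in itertools.combinations(edges, r):
                # keep only subsets that cover every point of A and of B
                if {a for a, _ in subset} == set(A) and {b for _, b in subset} == set(B):
                    cost = sum(weight(a, b) for a, b in subset)
                    if best is None or cost < best:
                        best = cost
        return best

    A, B = [0.0, 2.0, 5.0], [1.0, 6.0]
    print(min_many_to_many(A, B))                             # d_1 weights
    print(min_many_to_many(A, B, lambda a, b: (a - b) ** 2))  # d_2 weights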
|
218 |
Automatic Generation of Trace Links in Model-driven Software Development. Grammel, Birgit, 02 December 2014
Traceability data provides knowledge about the dependencies and logical relations that exist among the artefacts created during software development. By reasoning over traceability data, conclusions can be drawn that increase the quality of the software.
The paradigm of Model-driven Software Development (MDSD) promotes the generation of software from models, which are specified in different modelling languages; in subsequent model transformations, these models are used to generate program code automatically. Traceability data of the artefacts involved in an MDSD process can be used to increase software quality by providing the knowledge described above.
Existing traceability solutions in MDSD generate traceability data based on an integral model mapping of the transformation execution. Yet these solutions still entail a wide range of open challenges. One challenge is that the collected traceability data does not adhere to a unified formal definition, which leads to poorly integrated traceability data and complicates reasoning over it. Furthermore, these traceability solutions all depend on the existence of a transformation engine.
However, a transformation engine cannot be accessed in all MDSD settings, for instance when proprietary transformation engines or manually implemented transformations are used. In these cases it is not possible to instrument the transformation engine for the sake of generating traceability data, resulting in a lack of traceability data.
In this work, we address these shortcomings by proposing a generic traceability framework for augmenting arbitrary transformation approaches with a traceability mechanism. To integrate traceability data from different transformation approaches, our approach features a design-pattern-based methodology for such augmentation. The design pattern supplies the engineer with recommendations for designing the traceability mechanism and for modelling traceability data.
Additionally, to provide a traceability mechanism for inaccessible transformation engines, we leverage parallel model matching to generate traceability data for arbitrary source and target models. This approach is based on a language-agnostic concept of three similarity measures for matching. To realise the similarity measures, we exploit metamodel matching techniques for graph-based model matching. Finally, we evaluate our approach against a set of transformations from an SAP business application and from the MDSD domain.
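As a much-simplified illustration of deriving trace links by matching source and target models (not the framework or the three similarity measures of this thesis), the sketch below links source- and target-model element names with a single toy name-similarity measure and a threshold; the element names are invented for the example.

    # Illustrative sketch only: trace links from a single toy similarity measure
    # (name similarity), where the thesis combines three language-agnostic
    # measures realised with metamodel matching techniques.
    from difflib import SequenceMatcher

    def trace_links(source_elements, target_elements, threshold=0.6):
        """Return (source, target, score) triples whose name similarity passes the threshold."""
        links = []
        for s in source_elements:
            for t in target_elements:
                score = SequenceMatcher(None, s.lower(), t.lower()).ratio()
                if score >= threshold:
                    links.append((s, t, round(score, 2)))
        return links

    source = ["Customer", "Order", "OrderLine"]            # e.g. elements of a source model
    target = ["CustomerEntity", "OrderTable", "LineItem"]  # e.g. elements of a generated target model
    for link in trace_links(source, target):
        print(link)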
|
219 |
Spatial, Temporal and Spatio-Temporal Correspondence for Computer Vision Problems. Zhou, Feng, 01 September 2014
Many computer vision problems, such as object classification, motion estimation or shape registration, rely on solving the correspondence problem. Existing algorithms for spatial or temporal correspondence problems are usually NP-hard, difficult to approximate, and lack flexible models and mechanisms for feature weighting. This proposal addresses the correspondence problem in computer vision, introducing two new spatio-temporal correspondence problems and three algorithms to solve spatial, temporal and spatio-temporal matching between video and other sources. The main contributions of the thesis are: (1) Factorial graph matching (FGM). FGM extends existing work on graph matching (GM) by finding an exact factorization of the affinity matrix. Four benefits follow from this factorization: (a) there is no need to compute the costly (in space and time) pairwise affinity matrix; (b) it provides a unified framework that reveals commonalities and differences between GM methods, and the factorization provides a clean connection with other matching algorithms such as iterative closest point; (c) the factorization allows the use of a path-following optimization algorithm, which leads to improved optimization strategies and matching performance; (d) given the factorization, it becomes straightforward to incorporate geometric transformations (rigid and non-rigid) into the GM problem. (2) Canonical time warping (CTW). CTW is a technique to temporally align multiple multi-dimensional and multi-modal time series. CTW extends DTW by incorporating a feature-weighting layer to adapt different modalities, allows a more flexible warping as a combination of monotonic functions, and has linear complexity (unlike DTW, which is quadratic). We applied CTW to align human motion captured with different sensors (e.g., audio, video, accelerometers). (3) Spatio-temporal matching (STM). Given a video and a 3D motion capture model, STM finds the correspondence between subsets of video trajectories and the motion capture model. STM is efficiently and robustly solved using linear programming. We illustrate the performance of STM on the problem of human detection in video, and show how STM achieves state-of-the-art performance.
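For reference, the sketch below implements plain dynamic time warping (DTW), the quadratic baseline that CTW extends with a feature-weighting layer and more flexible warpings; it is not an implementation of CTW or of the other contributions, and the example sequences are invented.

    # Illustrative sketch only: classic O(n*m) dynamic time warping between two
    # 1-D sequences. CTW (the thesis contribution) extends this baseline; this
    # is not CTW itself.
    import numpy as np

    def dtw(x, y):
        """Return the DTW distance between two 1-D sequences x and y."""
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(x[i - 1] - y[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    print(dtw([0, 1, 2, 3, 2, 0], [0, 1, 1, 2, 3, 2, 1, 0]))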
|
220 |
Wide Baseline Stereo Image Rectification and Matching. Hao, Wei, 01 December 2011
Perception of depth information is central to three-dimensional (3D) vision problems. Stereopsis is an important passive vision technique for depth perception. Wide baseline stereo is a challenging problem that has recently attracted much interest from both the theoretical and application perspectives. In this research we approach the problem of wide baseline stereo using the geometric and structural constraints within feature sets.
The major contribution of this dissertation is a more efficient paradigm, compared to the state of the art, for handling the challenges introduced by perspective distortion in wide baseline stereo. To support this paradigm, a new feature-matching algorithm is proposed that extends state-of-the-art matching methods to larger-baseline cases. The proposed matching algorithm takes advantage of both the local feature descriptors and the structural pattern of the feature set, and improves the matching results in the case of large viewpoint change.
In addition, an innovative rectification method for uncalibrated images is proposed to make dense matching in wide baseline stereo possible. We noticed that existing rectification methods do not take into account the need for shape adjustment. By introducing geometric constraints on the pattern of the feature points, we propose a rectification method that maximizes structural congruency based on Delaunay triangulation nets and thus avoids some problems of existing methods.
The rectified stereo images can then be used to generate a dense depth map of the scene. The task is much simplified compared to some existing methods because the 2D search problem is reduced to a 1D search.
To validate the proposed methods, real-world images are used to test the performance, and comparisons to state-of-the-art methods are provided. The performance of the dense matching with respect to the changing baseline is also studied.
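For comparison, the sketch below shows a standard descriptor-only baseline for wide-baseline feature matching with OpenCV (ORB features, a k-NN ratio test and a RANSAC fundamental-matrix check); it does not use the structural constraints proposed in this dissertation, and the image file names are placeholders.

    # Illustrative sketch only: a descriptor-only feature-matching baseline.
    # The file names "left.jpg"/"right.jpg" are placeholders; this is not the
    # structure-aware method of the dissertation.
    import cv2
    import numpy as np

    img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]       # Lowe's ratio test

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    print(len(good), "ratio-test matches,", int(inliers.sum()), "RANSAC inliers")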
|