31

Raisonnement à partir d'informations structurées et hiérarchisées : application à l'information archéologique / Reasoning from structured and hierarchized information : application to archeological information

Serayet, Mariette 06 May 2010 (has links)
La thèse s'inscrit dans le cadre du projet européen VENUS. Dans ce contexte, la photogrammétrie est utilisée pour produire des relevés 3D des objets. La connaissance sur ces objets provient à la fois de l'archéologie sous-marine et de la photogrammétrie. Les informations étant structurées, hiérarchisées et parfois incomparables, nous nous sommes intéressés aux bases de croyance partiellement préordonnées. Par ailleurs, l'acquisition des données peut conduire à l'apparition d'incohérences. Nous avons étendu l'approche des R-ensembles à la restauration de la cohérence. Nous avons également proposé un cadre de travail pour la révision de bases de croyance partiellement préordonnées et deux relations d'inférence lexicographiques. Nous avons fourni pour ces approches une mise en œuvre en utilisant la programmation logique avec ASP. Finalement, nous avons implanté notre méthode de restauration de la cohérence dans le contexte de VENUS et nous l'avons testée sur les relevés issus du projet. / This PhD was carried out within the European VENUS project. In this context, photogrammetry is used to produce 3D surveys of objects, and the knowledge about the studied artefacts comes from both underwater archaeology and photogrammetry. This information being structured, hierarchized and sometimes incomparable, we focused on partially preordered belief bases. Data acquisition may also lead to inconsistency. We extended the Removed Set approach to inconsistency handling. To compare the subsets of formulae to remove in order to restore consistency, we introduced the lexicographic comparator. Moreover, we proposed a new framework for the revision of partially preordered belief bases and two lexicographic inference relations. We provided an implementation of these approaches using Answer Set Programming (ASP). Finally, we implemented our inconsistency handling method in the VENUS context and provided an experimental study on 3D surveys from the project.
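To make the removed-set idea above concrete, here is a minimal Python sketch (an illustrative toy only: the formula names and the conflict are invented, and the thesis's actual ASP encoding over partially preordered bases is not reproduced). It enumerates the cardinality-minimal sets of formulas whose removal restores consistency.

```python
# Toy removed-set computation: find minimal subsets to delete so that the
# remaining base is consistent. Hypothetical example data, not from the thesis.
from itertools import combinations

formulas = {"f1", "f2", "f3"}
conflicts = [frozenset({"f1", "f2"})]      # f1 and f2 cannot both be kept

def consistent(kept):
    return not any(c <= kept for c in conflicts)

def removed_sets(formulas, conflicts):
    """Cardinality-minimal sets whose removal restores consistency."""
    for k in range(len(formulas) + 1):
        hits = [set(r) for r in combinations(formulas, k)
                if consistent(formulas - set(r))]
        if hits:
            return hits
    return []

print(removed_sets(formulas, conflicts))   # e.g. [{'f1'}, {'f2'}]
```

The thesis replaces this simple cardinality criterion with lexicographic comparisons adapted to partially preordered bases.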
32

Formalisation et étude des explications dialectiques dans les bases de connaissances incohérentes / Formalizing and Studying Dialectical Explanations in Inconsistent Knowledge Bases

Arioua, Abdallah 17 October 2016 (has links)
Les bases de connaissances sont des bases de données déductives où la logique est utilisée pour représenter des connaissances de domaine sur des données existantes. Dans le cadre des règles existentielles, une base de connaissances est composée de deux couches : la couche de données, qui représente les connaissances factuelles, et la couche ontologique, qui incorpore des règles de déduction et des contraintes négatives. L'interrogation de données à l'aide des ontologies est la fonction de raisonnement principale dans ce contexte. Comme dans la logique classique, les contradictions posent un problème à l'interrogation car « d'une contradiction, on peut déduire ce qu'on veut (ex falso quodlibet) ». Récemment, des approches d'interrogation tolérantes aux incohérences ont été proposées pour faire face à ce problème dans le cadre des règles existentielles. Elles déploient des stratégies dites de réparation pour restaurer la cohérence. Cependant, ces approches sont parfois inintelligibles et peu intuitives pour l'utilisateur car elles mettent souvent en œuvre des stratégies de réparation complexes. Ce manque de compréhension peut réduire l'utilisabilité de ces approches car il réduit la confiance entre l'utilisateur et les systèmes qui les utilisent. Par conséquent, la problématique de recherche que nous considérons est comment rendre intelligible à l'utilisateur l'interrogation tolérante aux incohérences. Pour répondre à cette question de recherche, nous proposons d'utiliser deux formes d'explication pour faciliter la compréhension des réponses retournées par une interrogation tolérante aux incohérences. La première est dite de niveau méta et la seconde de niveau objet. Ces deux types d'explication prennent la forme d'un dialogue entre l'utilisateur et le raisonneur au sujet des déductions retournées comme réponses à une requête donnée. Nous étudions ces explications dans le double cadre de l'argumentation fondée sur la logique et de la dialectique formelle, et nous étudions leurs propriétés et leurs impacts sur les utilisateurs en termes de compréhension des résultats. / Knowledge bases are deductive databases where the machinery of logic is used to represent domain-specific and general-purpose knowledge over existing data. In the existential rules framework, a knowledge base is composed of two layers: the data layer, which represents the factual knowledge, and the ontological layer, which incorporates rules of deduction and negative constraints. The main reasoning service in this framework is answering queries over the data layer by means of the ontological layer. As in classical logic, contradictions trivialize query answering since everything follows from a contradiction (ex falso quodlibet). Recently, inconsistency-tolerant approaches have been proposed to cope with this problem in the existential rules framework. They deploy repairing strategies on the knowledge base to restore consistency and overcome the problem of trivialization. However, these approaches are sometimes unintelligible and not straightforward for the end-user as they implement complex repairing strategies. This can jeopardize the trust relation between the user and the knowledge-based system. In this thesis we answer the research question: "How do we make query answering intelligible to the end-user in the presence of inconsistency?". The answer the thesis is built around is: "We use explanations to facilitate the understanding of query answering". We propose meta-level and object-level dialectical explanations that take the form of a dialogue between the user and the reasoner about the entailment of a given query. We study these explanations in the framework of logic-based argumentation and formal dialectics, and we analyse their properties and their impact on users.
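Purely as a reading aid (not taken from the thesis), the dialogue form of such explanations can be pictured as a short exchange of typed moves; every predicate, rule and fact below is invented.

```python
# Hypothetical rendering of a dialectical explanation as a list of dialogue
# moves between the user and the reasoner. Illustrative content only.
from dataclasses import dataclass

@dataclass
class Move:
    speaker: str   # "user" or "reasoner"
    act: str       # "ask", "explain", "challenge", "defend"
    content: str

dialogue = [
    Move("user", "ask", "Why is atRisk(p1) an answer to my query?"),
    Move("reasoner", "explain", "Because smoker(p1) holds and smoker(X) -> atRisk(X)."),
    Move("user", "challenge", "Why should I accept smoker(p1)?"),
    Move("reasoner", "defend", "It survives in every repair of the inconsistent data."),
]

for m in dialogue:
    print(f"{m.speaker:>8} [{m.act:9}]: {m.content}")
```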
33

A Pairwise Comparison Matrix Framework for Large-Scale Decision Making

January 2013 (has links)
abstract: A Pairwise Comparison Matrix (PCM) is used to compute relative priorities of criteria or alternatives and is an integral component of widely applied decision making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues limiting its application to large-scale decision problems, specifically: (1) the curse of dimensionality, that is, a large number of pairwise comparisons need to be elicited from a decision maker (DM), and (2) inconsistent and (3) imprecise preferences may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions to address these limitations in three phases as follows. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets; this is done to derive the global weights of the elements from the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and is hence subject to biases and judgement errors. The second phase therefore proposes a trade-off PCM decomposition methodology to decompose a PCM into a number of optimally identified subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A Non-Linear Programming model is then developed that calculates PCM element weights which simultaneously maximize adherence to the preferences of the DM and minimize inconsistency. Comparison experiments are conducted using datasets collected from the literature to validate the proposed methodology. / Dissertation/Thesis / Ph.D. Industrial Engineering 2013
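For readers unfamiliar with PCMs, the standard AHP machinery this abstract builds on can be sketched in a few lines of Python; the 3x3 matrix is a made-up example and nothing here comes from the dissertation itself.

```python
# Deriving priority weights from a pairwise comparison matrix and checking
# Saaty's consistency ratio. Example matrix is hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])        # A[i, j] = how strongly criterion i beats j

# Principal eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty's consistency index and ratio (random index RI = 0.58 for n = 3).
n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58

print("weights:", np.round(w, 3))            # roughly [0.65, 0.23, 0.12]
print("consistency ratio:", round(CR, 3))    # < 0.1 means acceptably consistent
```

With n criteria, a full PCM needs n(n-1)/2 judgements, which is exactly the elicitation burden the dissertation's decomposition approach targets.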
34

Psicofísica de percepção em tomadas de decisão: a propriedade aditiva do grau de inconsistência intertemporal e uma nova proposta para a função peso de probabilidades / Psychophysics of perception in decision making: the additive property of the intertemporal degree of inconsistency and a new proposal for the probability weighting function.

Natália Destefano 20 June 2013 (has links)
Tomadas de decisão intertemporais são influenciadas não somente pelo efeito de desconto do valor em diferentes instantes, como também pelo efeito de percepção temporal. Uma das principais dificuldades que afetam experimentos padrões envolvendo estas escolhas é a simultaneidade de ambos os efeitos no processo de desconto. Através da unificação das leis da psicofísica de percepção do atraso e da associação destas às funções de desconto de valor, propusemos uma forma generalizada para o processo de desconto intertemporal envolvendo ambos os domínios. Mostramos também que a propriedade aditiva do grau de inconsistência, grandeza obtida a partir das funções de desconto, permite discriminar a influência de cada efeito no processo de tomada de decisão. De forma similar ao proposto para escolhas intertemporais, estendemos a teoria psicofísica de percepção ao domínio das probabilidades. Adotando a perspectiva de que o atraso médio para recebimento de uma recompensa está relacionado à sua probabilidade de recebimento, obtivemos uma função de desconto probabilística que abrange os efeitos de desconto de valor e de percepção de probabilidades. Em paralelo ao desenvolvimento no domínio experimental, exploramos também os modelos teóricos (axiomáticos) que fundamentam as tomadas de decisão probabilísticas. Propusemos que a forma da função peso de probabilidades, exploradas em modelos como a teoria da utilidade esperada dependente da ordenação e a teoria do prospecto acumulado, seja representada pela função de desconto generalizada que obtivemos a partir dos modelos fenomenológicos. Neste caso, a função peso é amparada por modelos fenomenológicos de decisão e deriva da suposição de que indivíduos se comportam de forma similar frente a probabilidades e atrasos. / Intertemporal decision making is influenced not only by the discounting of reward value at different moments, but also by the time perception effect. One of the main difficulties affecting standard experiments on intertemporal choices is the simultaneity of both effects in the discounting process. Through the unification of the psychophysical laws of delay perception and their association with value discounting functions, we proposed a generalized form of intertemporal discounting involving both domains. We also showed that the additive property of the degree of inconsistency, a quantity obtained from the discount functions, allows the influence of each effect on decision making to be distinguished. As proposed for intertemporal choices, we extended psychophysical perception theory to the domain of probabilities. Adopting the perspective that the average delay for receiving a reward is related to the probability of receiving it, we obtained a probabilistic discount function covering value discounting and probability perception. In parallel with the experimental development, we also explored the theoretical (axiomatic) models that underlie probabilistic decision making. We proposed that the shape of the probability weighting function, explored in models such as rank-dependent expected utility theory and cumulative prospect theory, should be described by the generalized probabilistic function that we obtained from the phenomenological models. Therefore, the weighting function is supported by phenomenological decision models and derives from the assumption that subjects behave similarly when facing probabilities and delays.
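For orientation, the two standard ingredients the abstract combines are sketched below in textbook form: classical hyperbolic value discounting and the Prelec probability weighting function. These are not the generalized function derived in the thesis, and the parameter values are arbitrary.

```python
# Textbook forms only; the thesis's generalized discount/weighting function
# is not reproduced here.
import math

def hyperbolic_discount(value, delay, k=0.1):
    """Standard hyperbolic discounting: V = A / (1 + k * D)."""
    return value / (1.0 + k * delay)

def prelec_weight(p, alpha=0.65):
    """Prelec (1998) weighting: w(p) = exp(-(-ln p) ** alpha)."""
    return math.exp(-((-math.log(p)) ** alpha))

print(hyperbolic_discount(100.0, delay=12))        # subjective value of 100 after 12 periods
print(prelec_weight(0.05), prelec_weight(0.95))    # small p over-weighted, large p under-weighted
```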
35

Dilemas deônticos : uma abordagem baseada em relações de preferência / Deontic dilemmas : an approach based on preference relations

Testa, Rafael Rodrigues, 1982- 12 August 2018 (has links)
Orientador: Marcelo Esteban Coniglio / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas / Made available in DSpace on 2018-08-12T04:24:47Z (GMT). No. of bitstreams: 1 Testa_RafaelRodrigues_M.pdf: 526495 bytes, checksum: 51a1bb2b8ec08b5fab8947cdac7f3ae4 (MD5) Previous issue date: 2008 / Resumo: Nosso objetivo neste trabalho é apresentar uma proposta de solução a paradoxos relacionados à lógica deôntica presentes na literatura, reunidos sob o que é chamado de dilemas deônticos - situações nas quais duas obrigações conflitantes estão presentes num mesmo sistema normativo. Situações deste tipo, quando formalizadas (em SDL - standard deontic logic - ou em outras lógicas relacionadas), levam a uma inconsistência. Nossa proposta baseia-se em relações de preferência que geram uma ferramenta de escolha dentre as duas soluções normativas conflitantes, o que evita a inconsistência e permite o pleno cumprimento do sistema. Justificativas filosóficas são fornecidas às ferramentas lógicas, bem como às suas implicações. / Abstract: The main purpose of this dissertation is to propose a solution to some paradoxes related to deontic logic present in the literature, known as deontic dilemmas - situations in which two conflicting obligations are present in the same normative system. Such situations, when formalized (in SDL - standard deontic logic - or in other related logics), lead to inconsistency. Our proposal is based on preference relations that generate a tool for choosing between the two conflicting normative solutions, which avoids the inconsistency and allows full compliance with the system. Philosophical justifications are given for the logical tools as well as for their implications. / Mestrado / Filosofia / Mestre em Filosofia
36

Conectivos de restauração local / Local restoration connectives

Corbalán, María Inés, 1978- 05 April 2012 (has links)
Orientador: Marcelo Esteban Coniglio / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas / Made available in DSpace on 2018-08-20T11:11:02Z (GMT). No. of bitstreams: 1 Corbalan_MariaInes_M.pdf: 1281325 bytes, checksum: e3827248da48b3c7e6a632848660be7e (MD5) Previous issue date: 2012 / Resumo: O presente trabalho tem como objetivo principal definir o conceito de Conectivo de Restauração Local. Revemos diversos sistemas lógicos conhecidos na literatura sob o ângulo do novo conceito introduzido... Observação: O resumo, na íntegra, poderá ser visualizado no texto completo da tese digital / Abstract: The present work aims principally to define the concept of Local Restoration Connective. We review systems of logic known in the literature from the point of view of this new concept... Note: The complete abstract is available in the full electronic document / Mestrado / Filosofia / Mestre em Filosofia
37

Measuring inconsistency in probabilistic knowledge bases / Medindo inconsistência em bases de conhecimento probabilístico

Glauber De Bona 22 January 2016 (has links)
In terms of standard probabilistic reasoning, in order to perform inference from a knowledge base, it is normally necessary to guarantee the consistency of such a base. When we come across an inconsistent set of probabilistic assessments, it interests us to know where the inconsistency is, how severe it is, and how to correct it. Inconsistency measures have recently been put forward as a tool to address these issues in the Artificial Intelligence community. This work investigates the problem of measuring inconsistency in probabilistic knowledge bases. Basic rationality postulates have driven the formulation of inconsistency measures within classical propositional logic. In the probabilistic case, the quantitative character of probabilities yielded an extra desirable property: that inconsistency measures should be continuous. To meet this requirement, inconsistency in probabilistic knowledge bases has been measured via distance minimisation. In this thesis, we prove that the continuity postulate is incompatible with basic desirable properties inherited from classical logic. Since minimal inconsistent sets are the basis for some desiderata, we look for more suitable ways of localising the inconsistency in probabilistic logic, while analysing the underlying consolidation processes. The AGM theory of belief revision is extended to encompass consolidation via probability adjustment. The new forms of characterising the inconsistency that we propose are employed to weaken some postulates, restoring the compatibility of the whole set of desirable properties. Investigations in Bayesian statistics and formal epistemology have been concerned with measuring an agent's degree of incoherence. In these fields, probabilities are usually construed as an agent's degrees of belief, determining her gambling behaviour. Incoherent agents hold inconsistent degrees of belief, which expose them to disadvantageous bet transactions - also known as Dutch books. Statisticians and philosophers suggest measuring an agent's incoherence through the guaranteed loss she is vulnerable to. We prove that these incoherence measures via Dutch books are equivalent to the inconsistency measures via distance minimisation from the AI community. / Em termos de raciocínio probabilístico clássico, para se realizar inferências de uma base de conhecimento, normalmente é necessário garantir a consistência de tal base. Quando nos deparamos com um conjunto de probabilidades que são inconsistentes entre si, interessa-nos saber onde está a inconsistência, quão grave esta é, e como corrigi-la. Medidas de inconsistência têm sido recentemente propostas como uma ferramenta para endereçar essas questões na comunidade de Inteligência Artificial. Este trabalho investiga o problema da medição de inconsistência em bases de conhecimento probabilístico. Postulados básicos de racionalidade têm guiado a formulação de medidas de inconsistência na lógica clássica proposicional. No caso probabilístico, o carácter quantitativo da probabilidade levou a uma propriedade desejável adicional: medidas de inconsistência devem ser contínuas. Para atender a essa exigência, a inconsistência em bases de conhecimento probabilístico tem sido medida através da minimização de distâncias. Nesta tese, demonstramos que o postulado da continuidade é incompatível com propriedades desejáveis herdadas da lógica clássica. Como algumas dessas propriedades são baseadas em conjuntos inconsistentes minimais, nós procuramos por maneiras mais adequadas de localizar a inconsistência em lógica probabilística, analisando os processos de consolidação subjacentes. A teoria AGM de revisão de crenças é estendida para englobar a consolidação pelo ajuste de probabilidades. As novas formas de caracterizar a inconsistência que propomos são empregadas para enfraquecer alguns postulados, restaurando a compatibilidade de todo o conjunto de propriedades desejáveis. Investigações em estatística Bayesiana e em epistemologia formal têm se interessado pela medição do grau de incoerência de um agente. Nesses campos, probabilidades são geralmente interpretadas como graus de crença de um agente, determinando seu comportamento em apostas. Agentes incoerentes possuem graus de crença inconsistentes, que os expõem a transações de apostas desvantajosas - conhecidas como Dutch books. Estatísticos e filósofos sugerem medir a incoerência de um agente através do prejuízo garantido ao qual ele está vulnerável. Nós provamos que estas medidas de incoerência via Dutch books são equivalentes a medidas de inconsistência via minimização de distâncias da comunidade de IA.
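To see in miniature what "inconsistency as distance minimisation" means (a toy illustration only, not the thesis's definitions), consider the assessments P(A) = 0.7 and P(not A) = 0.6, which no coherent probability assignment can satisfy jointly:

```python
# Toy incoherence measure: minimal L1 distance from the given assessments to
# a coherent pair (q, 1 - q). Hypothetical example, illustrative only.
def incoherence(p_a, p_not_a, steps=100_000):
    """Smallest total adjustment needed to make the two assessments coherent."""
    best = float("inf")
    for i in range(steps + 1):
        q = i / steps
        best = min(best, abs(p_a - q) + abs(p_not_a - (1.0 - q)))
    return best

print(incoherence(0.7, 0.6))   # ~0.3; a Dutch book against these prices also guarantees a 0.3 loss per unit stake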
38

Gestion des incohérences pour l'accès aux données en présence d'ontologies / Inconsistency Handling in Ontology-Mediated Query Answering

Bourgaux, Camille 29 September 2016 (has links)
Interroger des bases de connaissances avec des requêtes conjonctives a été une préoccupation majeure de la recherche récente en logique de description. Une question importante qui se pose dans ce contexte est la gestion de données incohérentes avec l'ontologie. En effet, une théorie logique incohérente impliquant toute formule sous la sémantique classique, l'utilisation de sémantiques tolérantes aux incohérences est nécessaire pour obtenir des réponses pertinentes. Le but de cette thèse est de développer des méthodes pour gérer des bases de connaissances incohérentes en utilisant trois sémantiques naturelles (AR, IAR et brave) proposées dans la littérature et qui reposent sur la notion de réparation, définie comme un sous-ensemble maximal des données cohérent avec l'ontologie. Nous utilisons ces trois sémantiques conjointement pour identifier les réponses associées à différents niveaux de confiance. En plus de développer des algorithmes efficaces pour interroger des bases de connaissances DL-Lite incohérentes, nous abordons trois problèmes : (i) l'explication des résultats des requêtes, pour aider l'utilisateur à comprendre pourquoi une réponse est (ou n'est pas) obtenue sous une des trois sémantiques, (ii) la réparation des données guidée par les requêtes, pour améliorer la qualité des données en capitalisant sur les retours des utilisateurs sur les résultats de la requête, et (iii) la définition de variantes des sémantiques à l'aide de réparations préférées pour prendre en compte la fiabilité des données. Pour chacune de ces trois questions, nous développons un cadre formel, analysons la complexité des problèmes de raisonnement associés, et proposons et mettons en œuvre des algorithmes, qui sont étudiés empiriquement sur un jeu de bases de connaissances DL-Lite incohérentes que nous avons construit. Nos résultats indiquent que même si les problèmes à traiter sont théoriquement durs, ils peuvent souvent être résolus efficacement dans la pratique en utilisant des approximations et des fonctionnalités des solveurs SAT modernes. / The problem of querying description logic knowledge bases using database-style queries (in particular, conjunctive queries) has been a major focus of recent description logic research. An important issue that arises in this context is how to handle the case in which the data is inconsistent with the ontology. Indeed, since in classical logic an inconsistent logical theory implies every formula, inconsistency-tolerant semantics are needed to obtain meaningful answers. This thesis aims to develop methods for dealing with inconsistent description logic knowledge bases using three natural semantics (AR, IAR, and brave) that were previously proposed in the literature and rely on the notion of a repair, which is an inclusion-maximal subset of the data consistent with the ontology. In our framework, these three semantics are used conjointly to identify answers with different levels of confidence. In addition to developing efficient algorithms for query answering over inconsistent DL-Lite knowledge bases, we address three problems that should support the adoption of this framework: (i) query result explanation, to help the user to understand why a given answer was (not) obtained under one of the three semantics, (ii) query-driven repairing, to exploit user feedback about errors or omissions in the query results to improve the data quality, and (iii) preferred repair semantics, to take into account the reliability of the data.
For each of these three topics, we developed a formal framework, analyzed the complexity of the relevant reasoning problems, and proposed and implemented algorithms, which we empirically studied over an inconsistent DL-Lite benchmark we built. Our results indicate that even if the problems related to dealing with inconsistent DL-Lite knowledge bases are theoretically hard, they can often be solved efficiently in practice by using tractable approximations and features of modern SAT solvers.
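As an aside for orientation (not drawn from the thesis, which works over DL-Lite ontologies), the repair-based AR, IAR and brave semantics can be illustrated on a tiny propositional example; the facts and the constraint below are invented.

```python
# Repairs = inclusion-maximal consistent subsets of the data; the three
# semantics then differ in which repairs must support a query.
from itertools import combinations

facts = {"teaches(ann, cs101)", "student(ann)", "prof(ann)"}
# Negative constraint: nobody is both a student and a professor.
conflicts = [{"student(ann)", "prof(ann)"}]

def consistent(s):
    return not any(c <= s for c in conflicts)

def repairs(facts):
    """Inclusion-maximal consistent subsets of the data."""
    subsets = [set(c) for k in range(len(facts), -1, -1)
               for c in combinations(facts, k) if consistent(set(c))]
    return [s for s in subsets if not any(s < t for t in subsets)]

reps = repairs(facts)
query = "prof(ann)"
print("repairs:", reps)
print("brave:", any(query in r for r in reps))      # holds in some repair  -> True
print("AR:   ", all(query in r for r in reps))      # holds in every repair -> False
print("IAR:  ", query in set.intersection(*reps))   # holds in the intersection -> False
```

In the thesis the check is logical entailment with the ontology rather than plain membership, but the nesting brave ⊇ AR ⊇ IAR giving decreasing levels of confidence is the same.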
39

Querying big RDF data : semantic heterogeneity and rule-based inconsistency / Interrogation de gros volumes de données : hétérogénéité sémantique et incohérence à base de règles

Huang, Xin 30 November 2016 (has links)
Le Web sémantique est la vision de la prochaine génération de Web proposée par Tim Berners-Lee en 2001. Avec le développement rapide des technologies du Web sémantique, de grandes quantités de données RDF existent déjà sous forme de données ouvertes et liées et ne cessent d'augmenter très rapidement. Les outils traditionnels d'interrogation et de raisonnement sur les données du Web sémantique sont conçus pour fonctionner dans un environnement centralisé. À ce titre, les algorithmes de calcul traditionnels vont inévitablement rencontrer des problèmes de performances et des limitations de mémoire. De gros volumes de données hétérogènes sont collectés à partir de différentes sources de données par différentes organisations. Ces sources de données présentent souvent des divergences et des incertitudes dont la détection et la résolution sont rendues encore plus difficiles dans le big data. Mes travaux de recherche présentent des approches et algorithmes pour une meilleure exploitation des données dans le contexte du big data et du Web sémantique. Nous avons tout d'abord développé une approche de résolution des identités (Entity Resolution) avec des algorithmes d'inférence et un mécanisme de liaison lorsque la même entité est fournie dans plusieurs ressources RDF décrites avec différentes sémantiques et identifiants de ressources URI. Nous avons également développé un moteur de réécriture de requêtes SPARQL basé sur le modèle MapReduce pour inférer, lors de l'évaluation de la requête, les données implicites décrites intensionnellement par des règles d'inférence. L'approche de réécriture traite également de la fermeture transitive et des règles cycliques pour la prise en compte de langages de règles plus riches comme RDFS et OWL. Plusieurs optimisations ont été proposées pour améliorer l'efficacité des algorithmes en réduisant le nombre de jobs MapReduce. La deuxième contribution concerne le traitement d'incohérence dans le big data. Nous étendons l'approche présentée dans la première contribution en tenant compte des incohérences dans les données. Cela comprend : (1) la détection d'incohérence à base de règles évaluées par le moteur de réécriture de requêtes que nous avons développé ; (2) l'évaluation de requêtes permettant de calculer des résultats cohérents selon une des trois sémantiques définies à cet effet. La troisième contribution concerne le raisonnement et l'interrogation sur de grandes quantités de données RDF incertaines. Nous proposons une approche basée sur MapReduce pour effectuer l'inférence de nouvelles données en présence d'incertitude. Nous proposons un algorithme d'évaluation de requêtes sur de grandes quantités de données RDF probabilistes pour le calcul et l'estimation des probabilités des résultats. / Semantic Web is the vision of the next generation of the Web proposed by Tim Berners-Lee in 2001. Indeed, with the rapid development of Semantic Web technologies, large-scale RDF data already exist as linked open data, and their volume is growing rapidly. Traditional Semantic Web querying and reasoning tools are designed to run in a stand-alone environment. Therefore, processing large-scale bulk data with traditional solutions will inevitably result in bottlenecks of memory space and computational performance. Large volumes of heterogeneous data are collected from different data sources by different organizations. In this context, the different sources often contain inconsistencies and uncertainties which are difficult to identify and evaluate.
To address these Semantic Web challenges, the main research contributions and approaches are proposed as follows. We first developed an inference-based semantic entity resolution approach and linking mechanism for the case where the same entity is provided in multiple RDF resources described using different semantics and URI identifiers. We also developed a MapReduce-based rewriting engine for SPARQL queries over big RDF data to handle, during query evaluation, the implicit data described intensionally by inference rules. The rewriting approach also deals with transitive closure and cyclic rules in order to support richer rule languages such as RDFS and OWL. The second contribution concerns distributed inconsistency processing. We extend the approach presented in the first contribution by taking inconsistency in the data into account. This includes: (1) rule-based inconsistency detection with the help of our query rewriting engine; (2) consistent query evaluation under three different semantics. The third contribution concerns reasoning and querying over large-scale uncertain RDF data. We propose a MapReduce-based approach to deal with large-scale reasoning under uncertainty. Rather than relying on the possible-worlds semantics, we propose an algorithm that generates an intensional SPARQL query plan over a probabilistic RDF graph in order to compute the probabilities of the query results.
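As a very rough picture of what rule-based query rewriting means here (invented vocabulary and data; the thesis does this at scale with MapReduce over SPARQL, which is not reproduced), a query about a superclass can be expanded with its subclasses up to a fixpoint instead of materialising the inferred triples:

```python
# Toy rewriting of a class query using RDFS-style subclass rules.
subclass_of = {"Student": "Person", "Professor": "Person"}   # hypothetical rules

def rewrite(query_class):
    """All classes whose instances answer a query for `query_class` (transitive)."""
    expanded = {query_class}
    changed = True
    while changed:   # fixpoint handles chains such as Undergrad < Student < Person
        new = {sub for sub, sup in subclass_of.items() if sup in expanded}
        changed = not new <= expanded
        expanded |= new
    return expanded

triples = [("ann", "type", "Student"), ("bob", "type", "Professor")]

wanted = rewrite("Person")                       # {"Person", "Student", "Professor"}
answers = [s for (s, p, o) in triples if p == "type" and o in wanted]
print(answers)                                   # ['ann', 'bob']
```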
40

Federal policy evolution, newcomer integration and data reporting: the strengths and weaknesses of Canadian immigration policy

Ray, Devraj 25 January 2022 (has links)
Among the different immigration streams in Canada (family reunification, economic immigration and refugee protection), newcomers have cited diverse experiences. This is problematic since Canada has a goal of increasing its population to a hundred million within the next seventy-eight years (Century Initiative, 2020). Sixty-two million new Canadians facing inconsistent settlement experiences would be considered a failure of this policy (Century Initiative, 2020). The literature on integration in Canada diverges into two streams: an economic conformity model and a socio-cultural model. According to the literature, Canada's immigration policies rely more on an economic conformity model than on a socio-cultural model of integration, with the former more widely cited. The strength of Canada's economic conformity model was challenged when comparing immigration policies and immigrant outcomes with Australia and New Zealand. Using a case-oriented comparative analysis, performance indicators demonstrated that Canada had the strongest socio-cultural integration policies among the three cases. These findings were triangulated by a document analysis of Immigration, Refugees and Citizenship Canada's departmental plans and performance reports from 1998 to 2020. Analyzing the evolution of immigration policies across the different streams showed that the federal government decentralized policies and programs to the provincial level. This allowed newcomers to better adapt to the needs and environment of their specific provinces, confirming Canada's socio-cultural approach to integration. The strength of Canada's immigration policy lies in the federal government's ability to decentralize programs and policies, such as welcoming and integrating new immigrants, to the provincial level. The document analysis also found inconsistencies in the performance indicators measuring integration across the three streams: economic immigrants were assessed only on economic integration factors, whereas family-reunified immigrants and refugees were assessed only on socio-cultural integration indicators. / Graduate
