1

On the enumeration of pseudo-intents: choosing the order and extending to partial implications

Bazin, Alexandre, 30 September 2014
This thesis deals with the problem of computing implications, that is, regularities of the form "when there is A, there is B", in datasets composed of objects described by attributes. Computing all the implications can be viewed as the enumeration of sets of attributes called pseudo-intents. It is known that pseudo-intents cannot be enumerated with polynomial delay in the lectic order, but no such result exists for other orders. While some existing algorithms do not necessarily enumerate in the lectic order, none of them has polynomial delay. This lack of knowledge about other orders leaves open the possibility that a polynomial-delay algorithm exists, and finding one would be a useful and significant step. Unfortunately, current algorithms do not let us choose the enumeration order, which makes studying the influence of the order on the complexity of the enumeration harder than necessary. As a first step towards a better understanding of the role of the order in the enumeration of pseudo-intents, we therefore propose an algorithm that can perform this enumeration in any order that respects the inclusion relation.

In the first part, we explain and study the properties of our algorithm. As with all enumeration algorithms, the main problem is to construct every set only once. To this end, we propose to use a spanning tree, itself based on the lectic order, to avoid multiple constructions of the same set. Using this spanning tree instead of the classic lectic order increases the space complexity but offers much more flexibility in the enumeration order. We show that, compared to the well-known Next Closure algorithm, ours performs fewer logical closures on sparse contexts and more once the average number of attributes per object exceeds 30% of the total. The space complexity of the algorithm is also studied empirically, and we show that different orders behave differently, the lectic order being the most efficient. We postulate that the efficiency of an order is a function of its distance to the order used in the canonicity test.

In the second part, we turn to the computation of implications in a more complex setting: relational data. In such contexts, objects are described both by attributes and by binary relations with other objects. The need to represent relational information causes an exponential increase in the number of attributes, so classical algorithms quickly become unusable. We propose a modification of our algorithm that enumerates the pseudo-intents of contexts in which relational information is represented by attributes, and we provide a quick study of the type of relational information that can be taken into account, using description logics as a framework for expressing the relational data.

In the third part, we extend our work to the more general domain of association rules. Association rules are regularities of the form "when there is A, there is B with x% certainty", so implications are association rules with 100% certainty. Our algorithm already computes a basis for implications; we propose a very simple modification and show that it allows the algorithm to compute the Luxenburger basis of a context in addition to the Duquenne-Guigues basis. This effectively allows our algorithm to compute a basis of minimal cardinality for association rules.
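The abstract above refers to two core subroutines of pseudo-intent enumeration: the logical closure of an attribute set under a set of implications, and the lectic-order canonicity test used by Next Closure. The sketch below is a minimal Python illustration of those two operations under the standard definitions; it is not code from the thesis, and the function names, string attributes and toy implication set are assumptions made purely for the example.

```python
from typing import FrozenSet, List, Tuple

# An implication "when there is A, there is B" is a (premise, conclusion) pair.
Implication = Tuple[FrozenSet[str], FrozenSet[str]]


def logical_closure(attrs: FrozenSet[str],
                    implications: List[Implication]) -> FrozenSet[str]:
    """Saturate `attrs` under the implications: whenever a premise is
    contained in the current set, add its conclusion, until a fixpoint."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return frozenset(closed)


def is_canonical(current: FrozenSet[str], candidate: FrozenSet[str],
                 attribute: str, order: List[str]) -> bool:
    """Lectic canonicity test in the style of Next Closure: `candidate`
    (obtained by closing `current` restricted to the attributes before
    `attribute`, plus `attribute` itself) is accepted only if it adds no
    attribute that comes before `attribute` in the fixed linear order."""
    earlier = set(order[:order.index(attribute)])
    return not ((candidate - current) & earlier)


if __name__ == "__main__":
    # Toy implication set over attributes a, b, c (purely illustrative).
    implications = [
        (frozenset({"a"}), frozenset({"b"})),       # a -> b
        (frozenset({"b", "c"}), frozenset({"a"})),  # b, c -> a
    ]
    print(sorted(logical_closure(frozenset({"a"}), implications)))       # ['a', 'b']
    print(sorted(logical_closure(frozenset({"b", "c"}), implications)))  # ['a', 'b', 'c']
```

In a Next Closure style enumeration, the canonicity test is what guarantees that each set is generated exactly once; the thesis replaces this fixed lectic traversal with a spanning tree based on it, so that other enumeration orders become possible.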
2

How policy travels: the course and effects of school funding policy on equity at different levels of the education system

Molale, Itumeleng Samuel, 10 September 2004
Successful implementation of equity-driven policies has proven to be a difficult and vexing issue, especially in developing countries. As a result, many educational reforms were found in practice to be at variance with their founding objectives. The purpose of this exploratory and descriptive study was therefore to trace the implementation pathway traveled by the National Norms and Standards for School Funding (NNSSF) policy from the center (National Department of Education) to the periphery (school level). This was informed by the need to explain where, how and why the discrepancy developed between the policy intentions and the educational outcomes (i.e. effects). The NNSSF policy aimed at the fundamental transformation of schools, since it required the following to happen: the delegation of financial management and authority to the School Governing Body (SGB), the day-to-day management of curriculum delivery, the generation of additional funds, and the improvement and maintenance of school infrastructure. The allocation and management of these functions constitute what is called "self-managing schools", thus freeing such schools from the bureaucratic processes associated with centralization. This investigation is guided by two research questions:

1. How was the new School Funding Policy (SFP) implemented within and through the different levels of the education system?
2. What were the effects of the National Norms and Standards for School Funding (NNSSF) policy on equity at school level?

In essence, this research explains how different education stakeholders understand the new funding policy, and with what effects. In tracing the course of the NNSSF policy, I paid special attention to policy breakdown by comparing and contrasting the views and estimations of various implementers across the four levels of the education system, namely the national, provincial, regional and school levels. This research on understandings of policy was not restricted to formal definitions of policy, but went further to seek understanding of the practical unfolding of the funding policy separately, and in relation to other policies. Data was collected over a period of seventeen (17) months. In this regard, I used multiple methods of data collection, including profiling, semi-structured interviews, critical observations of the setting, document analysis, photographic records and structured questionnaires. The main findings of the study include the following:

- The national officials showed a legalistic and formal understanding of the NNSSF policy, but such understanding lacked a holistic, coherent and integrated approach to equity.
- The understanding of the policy varied among the provincial officials, but again demonstrated a bureaucratic or functionalist-oriented approach to the implementation of the NNSSF policy. This suggests that much emphasis was placed on observing protocol and official communication of the new policy.
- The regional policy implementers demonstrated a limited understanding of the policy, which could be characterised as a disengaging approach to policy and a sense of despair about how the implementation unfolded.
- The effects of the NNSSF policy on equity differed across the five case-study schools. For example, previously advantaged schools (like Siege) experienced negative effects due to inadequate state allocation. This had ripple effects in the form of exorbitant school fees and the issuing of a lawsuit against a parent who was not able to pay such high fees.
- The previously disadvantaged schools were able to do their own planning, which led to the timeous acquisition of resources as a result of the financial allocation to the school level.

The key findings, as well as the implications of this research, not only make this study unique but also offer critical insights into policy implementation in developing contexts. The fact that the research involved the collection of data at four levels of the education system over a period of seventeen months generated extensive data sets for policy analysis. The collection of both qualitative (contextual) and quantitative data contributed to strengthening the validity and reliability of the study as a whole. Most importantly, the knowledge gained from this study not only offers policy lessons for the North-West province, but also yields important insights for policy implementers across the education system.

Thesis (PhD (Education Management and Policy Studies))--University of Pretoria, 2004.
