11 |
MSC in Tendon and Joint Disease: The Context-Sensitive Link Between Targets and Therapeutic Mechanisms
Roth, Susanne Pauline, Burk, Janina, Brehm, Walter, Troillet, Antonia 08 June 2023 (has links)
Mesenchymal stromal cells (MSC) represent a promising treatment option for tendon
disorders and joint diseases, primarily osteoarthritis. Since MSC are highly context-sensitive to their microenvironment, their therapeutic efficacy is influenced by their
tissue-specific pathologically altered targets. These include not only cellular
components, such as resident cells and invading immunocompetent cells, but also
components of the tissue-characteristic extracellular matrix. Although numerous in vitro
models have already shown potential MSC-related mechanisms of action in tendon and
joint diseases, only a limited number reflect the disease-specific microenvironment and
allow conclusions about well-directed MSC-based therapies for injured tendon and joint-associated tissues. In both injured tissue types, inflammatory processes play a pivotal
pathophysiological role. In this context, MSC-mediated macrophage modulation seems to
be an important mode of action across these tissues. Additional target cells of MSC
applied in tendon and joint disorders include tenocytes, synoviocytes as well as other
invading and resident immune cells. It remains of critical importance whether the context-sensitive interplay between MSC and tissue- and disease-specific targets results in an
overall promotion or inhibition of the desired therapeutic effects. This review presents the
authors’ viewpoint on disease-related targets of MSC therapeutically applied in tendon and
joint diseases, focusing on the equine patient as a valid animal model.
|
12 |
Context-Sensitivity Influences German and Chinese Preschoolers’ Comprehension of Indirect Communication
Schulze, Cornelia, Buttelmann, David, Zhu, Liqi, Saalbach, Henrik 20 November 2023 (has links)
Making inferences in communication is a highly context-dependent endeavor. Previous research
found cultural variations for context-sensitivity as well as for communication comprehension.
However, the relative impact of culture and context-sensitivity on communication
comprehension has not been investigated so far. The current study aimed at investigating this
interplay and tested 4- and 6-year-old children from Germany (n = 132) and China (n = 129).
Context-sensitivity was measured with an adapted version of the Ebbinghaus illusion. In this
task, children have to discriminate the size of two target circles that only appear to be of similar
size due to context circles surrounding the target circles. As expected, performance scores
indicated higher degrees of context-sensitivity in Chinese compared to German children and
that 6-year-olds were more context-sensitive than 4-year-olds. Further, in an object-choice
communication-comprehension task, children watched videos with puppets performing everyday
activities (e.g., pet care) and had to choose between two options (e.g., dog or rabbit). A
puppet expressed what she wanted either directly (“I want the rabbit”) or indirectly (“I have
a carrot”). The children had to choose one option to give to the puppet. In both cultures,
6-year-olds outperformed 4-year-olds and children understood direct communication better
than indirect communication. Culture was found to affect children’s processing speed of direct
communication. Moreover, culture influenced children’s context-sensitivity while context-sensitivity
influenced children’s accuracy in the indirect (but not the direct) communication
task. These findings demonstrate that taking context into account is especially important when
we are confronted with indirect communication.
|
13 |
Community-based early learning in Solomon Islands : cultural and contextual dilemmas influencing program sustainability
Burton, Lindsay Julia January 2011 (has links)
The Solomon Islands (SI), a small developing nation in the South Pacific, demonstrates an emergent community-based kindergarten model with the potential to promote context- and culture-relevant early learning and development. SI early childhood education (ECE) particularly rose in prominence with a 2008 national policy enactment requiring all children to attend three years of kindergarten as a prerequisite for primary school entry. However, these ECE programs remain severely challenged by faltering community support. Internationally, many ECE programs dramatically resemble a universalized Western-based model, with a decidedly specific discourse for “high quality” programs and practices for children ages 0-8. Often these uncritical international transfers of Euro-American ideologies promote restricted policies and practices. This has resulted in a self-perpetuating set of practices and values, which arguably prevent recognition of, and efforts to reinvent, more culturally-relevant, sustainable programs for the Majority World. Based on the Kahua region (est. pop. 4,500) of Makira-Ulawa Province, this collaborative, ethnographically-inspired, case study explores how community characteristics have affected the cultural and contextual sustainability of community-based ECE in remote villages. The study traces historical and cultural influences to present-day SI ECE. Subsequently, it explores the re-imagined SI approach to formal ECE program design, remaining challenges preventing these programs from being sustained by communities, and potential community-wide transformations arising from these initiatives. To achieve this, the study collaborated with stakeholders from all levels of SI society through extensive participant-observations, interviews, and participatory focus groups. The findings aim to inform regional sustainable development and resilient practices relating to ECE. Key research findings suggest five overarching principles influencing kindergarten sustainability: the presence of a “champion” for the ECE vision; community ownership-taking, awareness-building, and cooperation-maintenance; and program cultural/contextual sensitivity and relevance. These elements were found to be strongly linked with an intergenerational cultural decay in the Kahua region, as conceptualized through a model of Cyclically-Sustained Kindergarten Mediocrity.
|
14 |
Context-sensitive Points-To Analysis : Comparing precision and scalability
Kovalov, Ievgen January 2012 (has links)
Points-to analysis is a static program analysis that tries to predict the dynamic behavior of programs without running them. It computes reference information by approximating, for each pointer in the program, the set of possible objects to which it could point at runtime. In order to justify new analysis techniques, they need to be compared to the state of the art regarding their accuracy and efficiency. One of the main parameters influencing precision in points-to analysis is context-sensitivity, which analyses each method separately for each context in which it is called. Providing this property, however, decreases the scalability of the analysis and increases the memory consumed during the analysis process. The goal of this thesis is to present a comparison of the precision and scalability of context-sensitive and context-insensitive analyses using three different points-to analysis techniques (Spark, Paddle, P2SSA) produced by two research groups. This comparison establishes the basic trade-offs between scalability on the one hand and efficiency and accuracy on the other. The work builds on previous research in this field to investigate and implement several specific metrics covering each type of analysis regardless of context-sensitivity – Spark, Paddle and P2SSA. These three approaches to points-to analysis demonstrate the achievements of different research groups, and a common output format makes it possible to choose the most efficient type of analysis for a particular purpose.
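To make the role of context-sensitivity concrete, here is a minimal hypothetical Java example (not drawn from the thesis or from the Spark, Paddle, or P2SSA implementations). A context-insensitive analysis analyses identity() once for all of its callers, so the points-to sets of x and y are conflated; a call-site-sensitive analysis keeps the two calls apart.

```java
// Hypothetical illustration of context-(in)sensitive points-to analysis.
public class PointsToDemo {
    static Object identity(Object o) { return o; }

    public static void main(String[] args) {
        Object a = new String("a");      // allocation site A1
        Object b = new StringBuilder();  // allocation site A2

        Object x = identity(a);          // call site C1
        Object y = identity(b);          // call site C2

        // Context-insensitive result (one summary for identity()):
        //   pts(x) = pts(y) = {A1, A2}      -- sound but imprecise
        // Call-site-sensitive result (C1 and C2 analysed separately):
        //   pts(x) = {A1}, pts(y) = {A2}    -- precise
        System.out.println(x.getClass().getSimpleName() + " "
                + y.getClass().getSimpleName());
    }
}
```

The extra precision illustrated here is exactly what the thesis weighs against the additional analysis time and memory that context-sensitive techniques require.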
|
15 |
Élaboration d'un modèle de découverte et de composition des services web mobiles / Implementation of a mobile web services discovery and composition model
Ben Njima, Cheyma 06 July 2017 (has links)
Au cours des dernières décennies, Internet a connu une révolution et une croissance exponentielle. À la suite de cette croissance, un grand nombre de services web et d’applications ont émergé pour répondre aux différents besoins des consommateurs. En même temps, l’industrie du réseau mobile est devenue omniprésente, ce qui rend la plupart des utilisateurs inséparables de leurs terminaux mobiles. La combinaison de la technologie mobile et des services web fournit un nouveau paradigme appelé services web mobiles. Ainsi, la consommation des services web à partir des appareils mobiles émerge en proposant plusieurs facilités aux utilisateurs et en imposant plus de manipulations de ces services. En effet, afin que les utilisateurs trouvent des services répondant à leurs besoins, un mécanisme de découverte est nécessaire. Par ailleurs, les demandes sont devenues non seulement plus complexes mais aussi plus dynamiques ; un service unique qui offre une fonctionnalité simple et primitive est devenu insuffisant pour satisfaire les besoins et les exigences complexes. Par conséquent, la combinaison de multiples services pour fournir un service composite est de plus en plus demandée. Nous parlons ainsi des mécanismes de découverte et de composition des services web mobiles. Ces deux paradigmes sont mutuellement liés et complémentaires. La découverte et la composition des services web dans un environnement mobile soulèvent plusieurs défis qui n’existent pas dans un environnement classique (non mobile). Parmi ces défis se trouvent les contraintes limitées de l’appareil mobile, appelées dans ce travail contexte statique, ainsi que le changement de contexte qui est dû principalement à la mobilité du dispositif, appelé contexte dynamique. Ainsi, l’objet de la présente thèse est de proposer un framework de composition de services web mobiles englobant deux approches complémentaires. Une première approche est consacrée à la découverte des services web mobiles, appelée MobiDisc, et une deuxième propose une solution à la problématique de composition dans un contexte dynamique. Notre première approche exploite le contexte statique avec les propriétés de QoS et les préférences utilisateurs dans les descriptions sémantiques des services et de la requête utilisateur afin d’augmenter l’exactitude du processus de découverte. Quant à l’approche de composition, elle met l’accent sur le contexte dynamique qui peut modifier le résultat de la composition. L’objectif est de déterminer la sensibilité des services au contexte dynamique et de générer des plans de composition pour l’utilisateur, triés selon leur valeur de sensibilité globale, lui permettant de choisir la meilleure composition. / Over the last two decades, the Internet has grown exponentially, causing the emergence of web services and applications that meet the different needs of consumers. During the same period, the mobile network industry has become ubiquitous, making most users inseparable from their mobile devices. The combination of mobile technology and web services thus provides a new paradigm named mobile web services. Consuming web services from mobile devices offers several facilities to users and requires greater manipulation of these services, such as discovery, composition and execution. Indeed, in order for users to find services that meet their requirements, a discovery mechanism is needed.
Since requests have become not only more complex but also more dynamic, a single service that offers simple and primitive functionality has become insufficient to satisfy such complex requirements. Therefore, the combination of multiple services to provide a composite service is increasingly requested. We thus speak of mobile web service discovery and composition. These two paradigms are mutually linked and complementary. The discovery and composition of web services in a mobile environment raise several challenges that do not exist in a traditional (non-mobile) environment. Among these challenges are the limited capabilities of the mobile device, called in this work the static context, as well as the change of context due mainly to the mobility of the device, called the dynamic context. In this thesis we propose a framework for the composition of mobile web services encompassing two complementary approaches. A first approach, called MobiDisc, addresses the discovery of mobile web services, and a second proposes a solution to the problem of composition in a dynamic context. Our first approach uses the static context together with QoS properties and user preferences in the semantic descriptions of services and of the user query to increase the accuracy of the discovery process. As for the second, compositional approach, it focuses on the dynamic context, which can modify the composition result. The objective is to determine the sensitivity of the services to the dynamic context and to generate composition plans for the user, ordered according to their global sensitivity values, allowing the user to choose the best composition.
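As a rough sketch of the ranking step described above (the class and method names are hypothetical assumptions, not taken from the thesis), a composite plan's global sensitivity could be aggregated from the per-service sensitivity values and used to order the candidate plans offered to the user:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: rank composition plans by their global sensitivity
// to the dynamic context (lower = more stable under device mobility).
record ServiceCandidate(String name, double contextSensitivity) {}

record CompositionPlan(List<ServiceCandidate> services) {
    // Assumption: global sensitivity is the sum of per-service values;
    // the thesis may use a different aggregation function.
    double globalSensitivity() {
        return services.stream()
                .mapToDouble(ServiceCandidate::contextSensitivity)
                .sum();
    }
}

public class PlanRanker {
    static List<CompositionPlan> rank(List<CompositionPlan> plans) {
        return plans.stream()
                .sorted(Comparator.comparingDouble(CompositionPlan::globalSensitivity))
                .toList();
    }

    public static void main(String[] args) {
        var planA = new CompositionPlan(List.of(
                new ServiceCandidate("MapService", 0.5),
                new ServiceCandidate("WeatherService", 0.2)));
        var planB = new CompositionPlan(List.of(
                new ServiceCandidate("CachedMapService", 0.1),
                new ServiceCandidate("WeatherService", 0.2)));
        rank(List.of(planA, planB)).forEach(p ->
                System.out.println(p.globalSensitivity() + " -> " + p.services()));
    }
}
```

Sorting ascending by global sensitivity simply reflects the idea that the least context-sensitive composition is the most likely to remain valid as the dynamic context changes; the actual selection criterion used in the thesis may differ.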
|
16 |
Physical Activity Predicts Emotion-Context-Sensitivity
Shields, Morgan Christina 16 May 2014 (has links)
No description available.
|
17 |
Gestion de contexte pour l'optimisation de l'accès et l'adaptation des services sur des environnements hétérogènes / Context management for network access optimisation and services adaptation in heterogeneous environments
Loukil, Mehdi 20 December 2012 (has links)
Dans le domaine des TIC, les services de demain seront certainement basés sur des systèmes ubiquitaires, omniprésents et pervasifs. Ces systèmes devront prendre en considération différents paramètres provenant de l’environnement de l’utilisateur, c’est-à-dire son contexte. Le contexte de l’utilisateur peut être composé d’informations statiques ou dynamiques, objectives ou subjectives, quantitatives ou qualitatives. Il peut inclure des données telles que la localisation géographique, les caractéristiques du terminal utilisé, la température ambiante ou l’humeur de l’utilisateur. Afin d’améliorer la QoS et la QoE, les services et les systèmes doivent être adaptés aux changements du contexte des utilisateurs. Le contexte doit donc être collecté et interprété, et les règles d’adaptation du système doivent être définies. Sur les systèmes étendus, riches, dynamiques et hétérogènes, tels que ceux considérés dans le cadre de cette thèse, ces opérations doivent être automatisées. Vu la quantité et la complexité des données contextuelles à considérer, l’utilisation de la sémantique dans la gestion de contexte peut faciliter cette automatisation et ouvrir la porte au raisonnement et à l’adaptation automatiques. Aujourd’hui, peu de solutions viables existent pour cette problématique. Nous proposons alors d’utiliser et d’adapter des mécanismes et technologies provenant du web sémantique pour décrire et manipuler les informations de contexte. Dans un premier temps, nous avons proposé une méthodologie de conception qui nous a permis de proposer « Ubiquity-Ont » : une ontologie générique au domaine des TIC, flexible et extensible. Les données de contexte ont alors été décrites sous forme de concepts et d’instances, reliés par des relations sémantiques. Nous avons ensuite proposé une architecture overlay, composée de deux niveaux de virtualisation et permettant d’intégrer un gestionnaire de contexte, basé sur la sémantique, sur des environnements réseaux et services. Cette solution overlay permet (a) de masquer l’hétérogénéité des composants du système et (b) d’augmenter virtuellement les entités du système existant par les capacités nécessaires à la manipulation et au raisonnement sur les données sémantiques du contexte. Nos propositions ont été implémentées et testées sur une plateforme réelle et appliquées à deux cas d’études : la gestion de la mobilité sur des environnements de réseaux d’accès hétérogènes et l’optimisation de la consommation d’énergie dans les terminaux mobiles. / Future Information and Telecommunication Systems are expected to be pervasive and ubiquitous solutions, able to consider users’ context and to automatically adapt to their environments. Traditional configuration and management tools are not adapted. The richness, the heterogeneity and the complexity of the upcoming systems require automated solutions able to gather contextual information, to reason on it and to make the appropriate adaptation decisions. The representation and the sharing of contextual information are a key issue. In this thesis, we proposed and used a methodology to conceive « Ubiquity-Ont », a generic ontology dedicated to Information and Telecommunication Systems. Contextual information is then described through semantic concepts, instances and relations. We then proposed an overlay architecture, composed of two virtualization layers, that can integrate a semantic context management framework over existing networking environments.
This architecture is able (a) to hide any heterogeneity among the system components and (b) to augment the different entities with additional capacities for context gathering, reasoning and sharing operations. The proposed solutions were then implemented and tested on a real platform for two applications. The first is related to mobility management over heterogeneous wireless networks, and the second aims at power optimization on mobile terminals. These two case studies helped in proving and enhancing the proposed solutions.
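As a minimal sketch of what describing context semantically can look like (the namespace, class, and property names below are illustrative assumptions, not the actual Ubiquity-Ont vocabulary), contextual data such as a terminal's battery level or its current access point can be expressed as concepts, instances, and relations with Apache Jena, after which a rule engine or reasoner can derive adaptation decisions:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

// Hypothetical sketch: context data as semantic concepts, instances and
// relations (names are placeholders, not the real Ubiquity-Ont terms).
public class ContextSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/ubiquity-ont#";   // placeholder namespace
        Model model = ModelFactory.createDefaultModel();

        Resource terminalClass = model.createResource(ns + "MobileTerminal");
        model.createResource(ns + "terminal_42")
                .addProperty(RDF.type, terminalClass)
                .addLiteral(model.createProperty(ns + "batteryLevel"), 0.35)
                .addProperty(model.createProperty(ns + "connectedTo"),
                        model.createResource(ns + "wifi_ap_7"));

        // A context manager overlay could now reason over this model, e.g.
        // trigger a handover or lower service quality when batteryLevel is low.
        model.write(System.out, "TURTLE");
    }
}
```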
|
18 |
Program Slicing for Modern Programming Languages
Galindo Jiménez, Carlos Santiago 24 September 2025 (has links)
[ES] Producir software eficiente y efectivo es una tarea que parece ser tan difícil ahora como lo era para los primeros ordenadores. Con cada mejora de hardware y herramientas de desarrollo (como son compiladores y analizadores), la demanda de producir software más rápido y más complejo ha ido aumentando. Por tanto, todos estos análisis auxiliares ahora son una parte integral del desarrollo de programas complejos.
La fragmentación de programas es una técnica de análisis estático, que da respuesta a ¿Qué partes del programa pueden afectar a esta instrucción? Su aplicación principal es la depuración de programas, porque puede acotar la zona de código a la que el programador debe prestar atención mientras busca la causa de un error. También tiene otras muchas aplicaciones, como pueden ser la paralelización y especialización de programas, la comprensión de programas y el mantenimiento. En los últimos años, su uso más común ha sido como preproceso a otros análisis con alto coste computacional, para reducir el tamaño del programa a procesar, y, por tanto, el tiempo de ejecución de estos. La estructura de datos más popular para fragmentar programas es el system dependence graph (SDG), un grafo dirigido que representa las instrucciones de un programa como vértices, y sus dependencias como arcos. Los dos tipos principales de dependencias son las de control y las de datos, que encapsulan el flujo de control y datos en todas las ejecuciones posibles de un programa.
El área de lenguajes de programación está en eterno cambio, ya sea por la aparición de nuevos lenguajes o por el lanzamiento de nuevas características en lenguajes existentes, como pueden ser Java o Erlang. Sin embargo, la fragmentación de programas se definió originalmente para el paradigma imperativo. Aun así, hay características populares en lenguajes imperativos, como las arrays y las excepciones, que aún no tienen una representación eficiente y/o completa en el SDG. Otros paradigmas, como el funcional o el orientado a objetos, sufren también de un soporte parcial en el SDG.
Esta tesis presenta mejoras para construcciones comunes en la programación moderna, dividiendo contribuciones en las enfocadas a dependencias de control y las enfocadas a datos. Para las primeras, especificamos una nueva representación de instrucciones catch, junto a una descripción completa del resto de instrucciones relacionadas con excepciones. También analizamos las técnicas punteras para saltos incondicionales (p.e., break), y mostramos los riesgos de combinarlas con otras técnicas para objetos, llamadas o excepciones. A continuación, ponemos nuestra mirada en la concurrencia, con una formalización de un depurador de especificaciones CSP reversible y causal-consistente. En cuanto a las dependencias de datos, se enfocan en técnicas sensibles al contexto (es decir, más precisas en presencia de rutinas y sus llamadas). Exploramos las dependencias de datos generadas en programas concurrentes por memoria compartida, redefiniendo las dependencias de interferencia para hacerlas sensibles al contexto. A continuación, damos un pequeño rodeo por el campo de la indecidibilidad, en el que demostramos que ciertos tipos de análisis de datos sobre programas con estructuras de datos complejas son indecidibles. Finalmente, ampliamos un trabajo previo sobre la fragmentación de estructuras de datos complejas, combinándolo con la fragmentación tabular, que la hace sensible al contexto.
Además, se han desarrollado o extendido múltiples librerías de código con las mejoras mencionadas anteriormente. Estas librerías nos han permitido realizar evaluaciones empíricas para algunos de los capítulos, y también han sido publicadas bajo licencias libres, que permiten a otros desarrolladores e investigadores extenderlas y contrastarlas con sus propuestas, respectivamente. Las herramientas resultantes son dos fragmentadores de código para Java y Erlang, y un depurador de CSP reversible y causal-consistente. / [CA] La producció de programari eficient i eficaç és una tasca que resulta tan difícil hui dia com ho va ser durant l'adveniment dels ordinadors. Per cada millora de maquinari i ferramentes per al desenvolupament, augmenta sovint la demanda de programes, així com la seua complexitat. Com a conseqüència, totes aquestes anàlisis auxiliars esdevenen una part integral del desenvolupament de programari.
La fragmentació de programes és una tècnica d'anàlisi estàtica, que respon a "Quines parts d'aquest programa poden afectar a aquesta instrucció?". L'aplicació principal d'aquesta tècnica és la depuració de programes, per la seua capacitat de reduir la llargària d'un programa sense canviar el seu funcionament respecte a una instrucció que està fallant, delimitant així l'àrea del codi en què el programador busca l'origen de l'errada. Tot i això, té moltes altres aplicacions, com la paral·lelització i especialització de programes o la comprensió de programes i el seu manteniment. Durant els darrers anys, l'ús més freqüent de la fragmentació de programes ha sigut com a <<preprocés>> abans d'altres anàlisis amb un alt cost computacional, per tal de reduir-ne el temps requerit per realitzar-les. L'estructura de dades més popular per fragmentar programes és el system dependence graph (SDG), un graf dirigit representant-ne les instruccions d'un programa amb vèrtexs i les seues dependències amb arcs. Els dos tipus principals de dependència són el de control i el de dades, aquests encapsulen el flux de control i dades a totes les possibles execucions d'un programa.
L'àrea dels llenguatges de programació s'hi troba en constant evolució, o bé per l'aparició de nous llenguatges, o bé per noves característiques per als preexistents, com poden ser Java o Erlang. No obstant això, la fragmentació de programes s'hi va definir originalment per al paradigma imperatiu. Tot i que, també hi trobem característiques populars als llenguatges imperatius, com els arrays i les excepcions, que encara no en tenen una representació eficient i/o completa al SDG. Altres paradigmes, com el funcional o l'orientat a objectes, pateixen també d'un suport reduit al SDG.
Aquesta tesi presenta millores per a construccions comunes de la programació moderna, dividint les contribucions entre aquelles enfocades a les dependències de control i aquelles enfocades a dades. Per a les primeres, hi especifiquem una nova representació d'instruccions catch, junt amb una descripció de la resta d'instruccions relacionades amb excepcions. També hi analitzem les tècniques capdavanteres de fragmentació de salts incondicionals, i hi mostrem els riscs de combinar-ne-les amb altres tècniques per a objectes, instruccions de crida i excepcions. A continuació, hi posem la nostra atenció en la concurrència, amb una formalització d'un depurador d'especificacions CSP reversible i causal-consistent. Respecte a les dependències de dades, dirigim els nostres esforços a produir tècniques sensibles al context (és a dir, que es mantinguen precises en presència de procediments). Hi explorem les dependències de dades generades en programes concurrents amb memòria compartida, redefinint-ne les dependències d'interferència per a fer-ne-les sensibles al context. Seguidament, hi demostrem la indecidibilitat d'alguns tipus d'anàlisis de dades per a programes amb estructures de dades complexes. Finalment, hi ampliem un treball previ sobre la fragmentació d'estructures de dades complexes, combinant-lo amb la fragmentació tabular, fent-hi-la sensible al context.
A més a més, s'han desenvolupat o estés diverses llibreries de codi amb les millores esmentades prèviament. Aquestes llibreries ens han permés avaluar empíricament alguns dels capítols i també han sigut publicades sota llicències lliures, fet que permet a altres desenvolupadors i investigadors poder estendre-les i contrastar-les, respectivament. Les ferramentes resultants són dos fragmentadors de codi per a Java i Erlang, i un depurador CSP. / [EN] Producing efficient and effective software is a task that has remained difficult since the advent of computers. With every improvement on hardware and developer tooling (e.g., compilers and checkers), the demand for software has increased even further. This means that auxiliary analyses have become integral in developing complex software systems.
Program slicing is a static analysis technique that gives answers to "What parts of the program can affect a given statement?", and similar questions. Its main application is debugging, as it can reduce the amount of code on which a programmer must look for a mistake or bug. Other applications include program parallelization and specialisation, program comprehension, and software maintenance. Lately, it has mostly been applied as a pre-processing step in other expensive static analyses, to lower the size of the program and thus the analyses' runtime. The most popular data structure in program slicing is the system dependence graph (SDG), which represents statements as nodes and dependences as arcs between them. The two main types of dependences are control and data dependences, which encapsulate the control and data flow throughout every possible execution of a program.
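To make the idea concrete, consider a classic textbook-style example (hypothetical, not taken from this thesis): slicing the program below with respect to the printed value of sum keeps only the statements that can affect it and discards the computation of prod.

```java
// Illustrative static backward slice (hypothetical textbook example).
public class SliceExample {
    public static void main(String[] args) {
        int sum = 0;
        int prod = 1;                     // not in the slice
        for (int i = 1; i <= 10; i++) {   // in the slice (control dependence)
            sum += i;                     // in the slice (data dependence)
            prod *= i;                    // not in the slice
        }
        System.out.println(sum);          // slicing criterion
        System.out.println(prod);
    }
}
// The backward slice w.r.t. <System.out.println(sum), sum> keeps the
// declarations of sum and i, the loop, and "sum += i;". On the SDG this is
// the set of nodes that reach the criterion node through control- and
// data-dependence arcs.
```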
Programming languages are an ever-expanding subject, with new features coming to new releases of popular and up-and-coming languages like Python, Java, Erlang, Rust, and Go. However, program slicing was originally defined for (and has been mostly focused on) imperative programming languages. Even then, some popular elements of the imperative paradigm, such as arrays and exceptions, do not have an efficient or sometimes complete representation in the SDG. Other paradigms, such as functional or object-oriented programming, also suffer from partial support in the SDG.
This thesis presents improvements for common programming constructs, and its contributions are split into control and data dependence. For the former, we (i) specify a new representation of catch statements, along with a full description of other exception-handling constructs. We also (ii) analyse the current state-of-the-art technique for unconditional jumps (e.g., break or return), and show the risks of combining it with other popular techniques. Then, we focus on concurrency, with a (iii) formalisation of a reversible, causal-consistent debugger for CSP specifications. Switching to data dependences, we focus our contributions on making existing techniques context-sensitive (i.e., more accurate in the presence of routines or functions). We explore the data dependences involved in shared-memory concurrent programs, (iv) redefining interference dependence to make it context-sensitive. Afterwards, we take a small detour to (v) explore the decidability of various data analyses on programs with (and without) complex data structures and routine calls. Finally, we (vi) extend our previous work on slicing complex data structures to combine it with tabular slicing, which provides context-sensitivity.
Additionally, throughout this thesis, multiple supporting software libraries have been written or extended with the aforementioned improvements to program slicing. These have been used to provide empirical evaluations, and are available under libre software licenses, such that other researchers and software developers may extend or contrast them against their own proposals. The resulting tools are two program slicers for Java and Erlang, and a causal-consistent reversible debugger for CSP. / Galindo Jiménez, CS. (2024). Program Slicing for Modern Programming Languages [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/211183
|
19 |
Perspective in context : relative truth, knowledge, and the first person
Kindermann, Dirk January 2012 (has links)
This dissertation is about the nature of perspectival thoughts and the context-sensitivity of the language used to express them. It focuses on two kinds of perspectival thoughts: 'subjective' evaluative thoughts about matters of personal taste, such as 'Beetroot is delicious' or 'Skydiving is fun', and first-personal or de se thoughts about oneself, such as 'I am hungry' or 'I have been fooled.' The dissertation defends a novel form of relativism about truth - the idea that the truth of some (but not all) perspectival thought and talk is relative to the perspective of an evaluating subject or group. In Part I, I argue that the realm of 'subjective' evaluative thought and talk whose truth is perspective-relative includes attributions of knowledge of the form 'S knows that p.' Following a brief introduction (chapter 1), chapter 2 presents a new, error-theoretic objection against relativism about knowledge attributions. The case for relativism regarding knowledge attributions rests on the claim that relativism is the only view that explains all of the empirical data from speakers' use of the word "know" without recourse to an error theory. In chapter 2, I show that the relativist can only account for sceptical paradoxes and ordinary epistemic closure puzzles if she attributes a problematic form of semantic blindness to speakers. However, in chapter 3 I show that all major competitor theories - forms of invariantism and contextualism - are subject to equally serious error-theoretic objections. This raises the following fundamental question for empirical theorising about the meaning of natural language expressions: If error attributions are ubiquitous, by which criteria do we evaluate and compare the force of error-theoretic objections and the plausibility of error attributions? I provide a number of criteria and argue that they give us reason to think that relativism's error attributions are more plausible than those of its competitors. In Part II, I develop a novel unified account of the content and communication of perspectival thoughts. Many relativists regarding 'subjective' thoughts and Lewisians about de se thoughts endorse a view of belief as self-location. In chapter 4, I argue that the self-location view of belief is in conflict with the received picture of linguistic communication, which understands communication as the transmission of information from the speaker's head to the hearer's head. I argue that understanding mental content and speech act content in terms of sequenced worlds allows a reconciliation of these views. On the view I advocate, content is modelled as a set of sequenced worlds - possible worlds 'centred' on a group of individuals inhabiting the world at some time. Intuitively, a sequenced world is a way a group of people may be. I develop a Stalnakerian model of communication based on sequenced worlds content, and I provide a suitable semantics for personal pronouns and predicates of personal taste. In chapter 5, I show that one of the advantages of this model is its compatibility with both nonindexical contextualism and truth relativism about taste. I argue in chapters 5 and 6 that the empirical data from eavesdropping, retraction, and disagreement cases supports a relativist completion of the model, and I show in detail how to account for these phenomena on the sequenced worlds view.
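Schematically, and only as a gloss on the abstract's wording rather than the dissertation's own formalism, a sequenced world can be written as a tuple and a content as a set of such tuples:

```latex
% A sequenced world: a possible world w centred on a sequence of
% individuals a_1, ..., a_n inhabiting w at a time t.
\sigma \;=\; \langle w,\ \langle a_1, \dots, a_n \rangle,\ t \rangle
% A (sequenced-worlds) content C is a set of sequenced worlds:
C \;\subseteq\; \bigl\{\, \langle w,\ \langle a_1, \dots, a_n \rangle,\ t \rangle
    \;\bigm|\; a_1, \dots, a_n \text{ inhabit } w \text{ at } t \,\bigr\}
```

Intuitively, a sequenced world is a way a group of people may be, so believing or asserting such a content locates the group within that set.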
|