31

Équations de Monge-Ampère complexes paraboliques / Parabolic complex Monge-Ampère equations

Do, Hoang Son 29 September 2015 (has links)
The aim of this thesis is to contribute to the understanding of parabolic complex Monge-Ampère equations on domains of C^n. These equations are closely related to the Kähler-Ricci flow. Our study centers on cases where the initial condition is irregular; we prove the existence of solutions that are continuous up to the boundary and up to the initial time.
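For orientation, a typical model equation of this type, written here as a sketch of the standard formulation rather than the thesis's exact statement, reads:

```latex
% Parabolic complex Monge-Ampère equation on a bounded domain
% \Omega \subset \mathbb{C}^n, for a potential u(t,z):
\frac{\partial u}{\partial t}
  = \log\det\!\left(\frac{\partial^{2} u}{\partial z_{j}\,\partial\bar{z}_{k}}\right)
  + F\bigl(t,z,u\bigr)
  \qquad \text{in } (0,T)\times\Omega,
% with initial data u(0,\cdot) = u_0 (possibly irregular) and
% boundary values prescribed on (0,T)\times\partial\Omega.
```

The link to the Kähler-Ricci flow comes from the fact that, at the level of potentials, that flow is governed by an equation of this log-determinant type.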
32

Modelo de Probabilidade a Priori Para Estimativa de Custos Na Gestão de Projetos Em Construção Civil / A prior probability model for cost estimation in construction project management

PALHA, Rachel Perez 26 December 2012 (has links)
Construction projects are usually managed following conventional project management methodologies. Today, however, many companies opt for alliance or cost-plus (administration) contracts because of the advantages they bring in terms of information transparency and access to the technologies employed. This type of contract, however, makes it harder to monitor the works, since its commercial conditions violate a number of constraints imposed by the management models normally used. To address this problem, this dissertation develops a model for estimating the cost of a project contracted under an alliance regime. A case study was carried out on a project that was already under way and had been facing several difficulties in computing its final cost; four experts were therefore interviewed in order to determine, from their prior knowledge, the probability density functions of the variables most significant to the total value of the works. A Pareto chart was used to select the 11 most significant variables for probability elicitation, and the interviews were conducted to obtain the cumulative probability function of each. The selected variables were (1) labour for track grid assembly; (2) production of ballast gravel; (3) production of sleepers; (4) box culverts; (5) drainage concrete; (6) formwork for drainage concrete; (7) concrete for special structures; (8) formwork and shoring for concrete of special structures; (9) excavation, loading and haulage of category 1 and 2 material; (10) excavation, loading and haulage of category 3 material; and (11) embankment compaction. The Weibull distribution was chosen for the modelling so that the functions could be linearised. Simple linear regression was then applied to compute the parameters of each random variable and, from these, its expected value and variance. The resulting model proved applicable to the contract, since reliable and complete information, including the lower and upper bounds of the value to which the contract would be subject, can be obtained quickly. By applying this model, the relationship with the contracting party could become even more transparent, improving trust within the contract.
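As a sketch of the estimation step just described, the Weibull linearisation followed by simple linear regression might look as follows; the elicited points and all numbers are invented for illustration, not taken from the dissertation:

```python
import numpy as np
from math import gamma

# Elicited cumulative-probability points for one cost variable
# (illustrative values only): cost levels x_i and the expert's P(X <= x_i).
x = np.array([100.0, 150.0, 200.0, 300.0, 450.0])   # cost (e.g., thousands of R$)
F = np.array([0.10, 0.30, 0.50, 0.75, 0.95])        # elicited CDF values

# Weibull CDF: F(x) = 1 - exp(-(x/eta)^beta). Taking logs twice linearises it:
#   ln(-ln(1 - F)) = beta*ln(x) - beta*ln(eta)
Y = np.log(-np.log(1.0 - F))
X = np.log(x)

# Simple linear regression Y = a*X + b  =>  beta = a, eta = exp(-b/a)
a, b = np.polyfit(X, Y, 1)
beta = a
eta = np.exp(-b / a)

# Expected value and variance of the fitted Weibull
mean = eta * gamma(1.0 + 1.0 / beta)
var = eta**2 * (gamma(1.0 + 2.0 / beta) - gamma(1.0 + 1.0 / beta) ** 2)

print(f"beta={beta:.3f}, eta={eta:.1f}, E[X]={mean:.1f}, Var[X]={var:.1f}")
```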
33

Réseaux de neurones à convolutions pour la segmentation multi structures d'images par résonance magnétique cardiaque / Convolutional neural networks for multi-structure segmentation of cardiac magnetic resonance images

Zotti, Clément January 2018 (has links)
Magnetic resonance imaging (MRI) is an acquisition technique that makes it possible to visualise the different tissues of the human body. Its principle is based on the magnetic moment of the protons of hydrogen atoms; since the body is mostly composed of water, and therefore of hydrogen, MRI is a method of choice for cardiac imaging. It is widely used clinically to observe and diagnose cardiac diseases such as myocardial infarction, dilated cardiomyopathy, and hypertrophic cardiomyopathy. For the heart, three anatomical structures are mainly studied: the left ventricular cavity, the right ventricular cavity, and the myocardium. To this end, a manual, semi-automatic, or automatic segmentation of the MR image is required; once these structures are segmented, various physiological parameters can be computed to assess a patient's disease. Segmentation methods often focus on the left ventricular cavity, while the other structures are mostly segmented by hand by a physician, which takes considerable time (about 10 to 15 minutes per heart). This thesis presents an anonymised database of cardiac images from 150 patients with various heart diseases. It also presents a new method for automatically segmenting the three structures without any human intervention. The method is based on deep learning, which makes it very fast (180 milliseconds per volume). To make the segmentations more faithful, it incorporates a contour term, which yields more precise segmentation of structure boundaries, and a shape prior, which keeps the segmentation closer to that of a real heart (no holes or anatomically impossible configurations). This research was done in collaboration with the Université de Bourgogne and the Université de Lyon in France, which made possible the creation of this cardiac database and the validation of the results.
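A sketch of how such a composite objective might be assembled, assuming a PyTorch-style setup; the weighting scheme, the gradient-based contour term, and the template-based shape prior are illustrative stand-ins, not the thesis's exact formulation:

```python
import torch
import torch.nn.functional as F

def composite_seg_loss(logits, target, shape_template, w_contour=0.5, w_shape=0.1):
    """Cross-entropy + contour agreement + shape-prior penalty (illustrative)."""
    # Standard pixel-wise segmentation loss.
    ce = F.cross_entropy(logits, target)

    # Contour term: compare spatial-gradient magnitudes of the predicted
    # structure probabilities with those of the ground-truth mask, so that
    # blurry or misplaced boundaries are penalised (a crude stand-in).
    prob = torch.softmax(logits, dim=1)[:, 1:]          # non-background channels
    tgt = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()[:, 1:]

    def grad_mag(x):
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return dx.abs().mean() + dy.abs().mean()

    contour = (grad_mag(prob) - grad_mag(tgt)).abs()

    # Shape prior: penalise probability mass falling outside a registered
    # template of plausible anatomy (shape_template is an assumed, given
    # [C-1, H, W] map), discouraging holes and impossible configurations.
    shape = (prob * (1.0 - shape_template)).mean()

    return ce + w_contour * contour + w_shape * shape
```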
34

Análise custo–benefício de manutenções preventivas em linhas de transmissão da Celpe com base no conhecimento a priori de especialistas / Cost-benefit analysis of preventive maintenance on Celpe transmission lines based on experts' prior knowledge

FERREIRA, Alison da Costa 09 July 2015 (has links)
Because resources are generally scarce and limited, optimising them is what allows large organisations to remain competitive, since the costs tied to performing a given service are mitigated and linked to productivity and quality gains. For the maintenance manager of an electricity distribution company, allocating the available financial resources to eliminate imminent defects, those regarded as potential threats to the main operational indicators measuring the frequency and duration of failures, is an arduous task, given that the available budget does not allow all identified defects to be corrected. Analysing the problem from a cost-benefit viewpoint is an excellent way to make explicit how much will be spent on corrections and what return is expected from these maintenance actions. Beyond the frequency of interruptions, the restoration of interrupted power is crucial for a company in the electricity sector to stay within the regulatory targets pre-established by the National Electric Energy Agency (ANEEL), and this is the main benefit of the proposed model. This work is a case study of transmission lines of the Pernambuco Energy Company (CELPE) on which existing defects need to be corrected to reduce the probability of unavailability of these assets, projecting reductions in the operational indicator DEC (equivalent interruption duration per consumer unit) based on the available budget. However, the lack of data needed to support the planning of these corrections hinders the use of statistical models based on a frequentist view of occurrences; for this reason, the prior knowledge of transmission-line inspectors becomes the decisive input for structuring the model. Thus, besides demonstrating the gains described above from the use of experts' prior knowledge, the proposed work allows the maintenance manager to argue about the impact of leaving defects uncorrected and raises the maintenance area to a higher standing within the organisation.
35

Tapping into Students' Culturally Informed Prior Knowledge: A Study of Four Instructors Teaching Undergraduate Biology

Woodson, Jolie January 2021 (has links)
While it is well established that pedagogies purposefully linking subject matter to students' cultural contexts and prior knowledge can help students learn subject matter, little is known about practices for so doing in undergraduate biology courses enrolling substantial numbers of racially, culturally, and otherwise diverse students. This study sought to understand how four biology instructors of primarily Black and Hispanic students enact a form of teaching that draws out and uses knowledge from and about students' lives—what I refer to as students' culturally informed prior knowledge—to help students learn key subject-matter ideas in biology. It also examined how instructors managed their efforts to teach in this way and how they portrayed their reasons for so doing. The study derived several insights. One, instructors can connect important subject-matter ideas, in the study of biology, to facets of students' daily lives, using the latter to advance students' understanding of the former. Thus, the teaching of college-level biology with knowledge drawn from students' lives is more than an aspiration; it can and—per my study—does occur. Two, to enact such teaching, instructors can strive to draw comparisons between topics that are concrete and familiar to students and new subject-matter ideas to make the latter comprehensible to them. Three, instructors can connect subject matter to their students' physiological experiences, treating students' thinking about their bodies as a form of prior knowledge. Four, instructors can call students' commonly accepted yet incomplete or unexamined ideas and beliefs into question to facilitate their learning of subject-matter ideas. Five, instructors' efforts to teach using knowledge from students' lives can include planning and forethought—but also improvisation while teaching. Six, a desire to make the subject matter of their course relatable to students can inspire instructors to teach using knowledge from students' lives. The study recommends (a) changes in institutional policy toward supporting faculty in efforts to teach using knowledge from students' lives, and (b) future research into the teaching of biology and other STEM subjects that takes into account students' prior knowledge.
36

Inversion of SkyTEM Data Based on Geophysical Logging Results for Groundwater Exploration in Örebro, Sweden

Kindlund, Andrée January 2021 (has links)
Declining groundwater tables threaten several municipalities in Sweden that draw their drinking water from them. To ensure a sound drinking-water supply, the Geological Survey of Sweden has initiated a groundwater exploration plan. Airborne electromagnetic measurements have seen a rise in hydrogeophysical projects and have great potential to yield high-quality models, especially when combined with drilling data. In 2018, the Geological Survey of Sweden conducted an airborne electromagnetic survey, using the SkyTEM system, on the outskirts of Örebro, Sweden. SkyTEM is a time-domain system developed specifically with hydrogeophysical applications in mind and is the most favoured system in hydrogeophysical investigations. It is unique in being able to measure interleaved low- and high-moment current pulses, which enables both high resolution close to the surface and an increased depth of investigation. During 2019, further drilling in the area was carried out, including both lithological logging and geophysical logging of natural gamma and normal resistivity; high natural gamma radiation typically indicates clay content in the rocks. The geology of the area has been well explored since the 1940s, when oil was extracted from alum shale in Kvarntorp, located in the survey area. Rocks of sedimentary origin reach approximately 80 m down to the contact with the crystalline bedrock, and well-preserved layers of limestone, shale, alum shale, and sandstone are common throughout the area. Combining SkyTEM data with borehole data increases confidence and generates a model that better reflects the geology of the area. The modelling was performed with the AarhusInv inversion code, developed by the HydroGeophysical Group (HGG) at Aarhus University, Denmark. Four different models along a single line were generated, using 3, 4, 6, and 30 layers for the reference model in the inversion. Horizontal constraints were applied to all models; vertical constraints were applied only to the 30-layer model. The survey flight altitude is considered high, and in combination with the removal of noise-affected data points, the maximum number of layers resolvable in the final model is limited to three, which suggests that the 3-layer model is the most representative for this survey. The conductive shale seen in the geophysical log is visible in all models at a depth of roughly 40-60 m, consistent with the log. No information is retrieved below the shale, so the contact between the sandstone and the crystalline rock is not resolved; the lack of information below a highly conductive structure is expected due to shielding effects. This study recommends carefully assessing the flight altitude during quality-control analysis at the survey-design stage.
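As a greatly simplified sketch of what a constrained inversion step does (a linearised least-squares update with a smoothness penalty standing in for the horizontal and vertical constraints; the sensitivity matrix and data below are synthetic, and this is not AarhusInv's actual interface):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearised problem: d = G m + noise, where m holds log-resistivities
# of a layered model and G is a (made-up) sensitivity matrix.
n_data, n_layers = 20, 30
G = rng.normal(size=(n_data, n_layers))
m_true = np.cumsum(rng.normal(scale=0.2, size=n_layers))  # smooth "true" model
d = G @ m_true + rng.normal(scale=0.05, size=n_data)

# First-difference operator: penalises jumps between adjacent layers,
# playing the role of the vertical constraints in a 30-layer inversion.
D = np.diff(np.eye(n_layers), axis=0)

lam = 1.0  # constraint strength (trade-off parameter)
# Regularised normal equations: (G^T G + lam^2 D^T D) m = G^T d
m_est = np.linalg.solve(G.T @ G + lam**2 * (D.T @ D), G.T @ d)

print("rms data misfit:", np.sqrt(np.mean((d - G @ m_est) ** 2)))
```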
37

Monotonic Probability Distribution : Characterisation, Measurements under Prior Information, and Application / Monotone Wahrscheinlichkeitsverteilung : Charakterisierung, Messverfahren unter Vorinformation und Anwendung

Sans, Wolfgang January 2019 (has links) (PDF)
Statistical procedures for modelling a random phenomenon depend heavily on the choice of a certain family of probability distributions. Frequently, this choice is governed by mathematical convenience, disregarding that some distribution properties may contradict reality; at best, the chosen distribution can be considered an approximation. The present thesis starts with a construction of distributions that uses solely the available information and yields the distribution of greatest uncertainty in the sense of the maximum entropy principle. One such distribution is the monotonic distribution, which is determined solely by its support and its mean. Although classical frequentist statistics provides estimation procedures that can incorporate prior information, such procedures are rarely considered. A general frequentist scheme for the construction of shortest confidence intervals for distribution parameters under prior information is presented. In particular, the scheme is used to establish confidence intervals for the mean of the monotonic distribution and is compared with classical procedures. Additionally, an approximative procedure for the upper bound of the support of the monotonic distribution is proposed. A core purpose of audit sampling is the determination of confidence intervals for the mean of zero-inflated populations. The monotonic distribution is used to model such a population and enters the construction of a confidence interval under prior information for the mean. The results are compared with two-sided intervals of Stringer type.
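To make the maximum-entropy construction concrete, here is a standard worked instance; it illustrates the principle, and the thesis's exact definition of the monotonic distribution may differ in detail:

```latex
% Maximise the differential entropy on a known support [0,b] with known mean \mu:
%   \max_p \; H(p) = -\int_0^b p(x)\log p(x)\,dx
%   \text{s.t.}\quad \int_0^b p(x)\,dx = 1, \qquad \int_0^b x\,p(x)\,dx = \mu.
% The Lagrange stationarity condition -\log p(x) - 1 + \lambda_0 + \lambda_1 x = 0
% forces an exponential shape:
p(x) = c\,e^{\lambda x}, \qquad 0 \le x \le b,
% with c and \lambda fixed by the two constraints: \lambda < 0 for \mu < b/2
% (decreasing density), \lambda > 0 for \mu > b/2 (increasing), and the uniform
% density in the limit \mu = b/2. In every case the density is monotone and
% determined solely by the support and the mean.
```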
38

Statistical Failure Prediction with an Account for Prior Information / Statistische Vorhersage von Ausfällen unter Berücksichtigung von Vorinformationen

Kann, Lennart January 2020 (has links) (PDF)
Prediction intervals are needed in many industrial applications. In mass production, small subgroups of unknown size frequently exist whose lifetime behavior differs from that of the remainder of the population. A risk assessment for such a subgroup consists of two steps: i) the estimation of the subgroup size, and ii) the estimation of the lifetime behavior of this subgroup. This thesis covers both steps. An efficient practical method to estimate the size of a subgroup is presented and benchmarked against other methods. A prediction interval procedure which includes prior information in the form of a Beta distribution is provided; this scheme is applied to the prediction of binomial and negative binomial counts. The effect of the population size on the prediction of the future number of failures is considered for a Weibull lifetime distribution whose parameters are estimated from censored field data. Methods to obtain a prediction interval for the future number of failures with unknown sample size are presented. In many applications, failures are reported with a delay; the effects of such a reporting delay on the coverage properties of prediction intervals for the future number of failures are studied. The total failure probability of the two steps can be decomposed as a product probability, and one-sided confidence intervals for such a product probability are presented.
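A sketch of a Beta-prior prediction scheme for binomial counts along the lines described above; the prior parameters and data are invented, and this conjugate-update route is an assumption rather than the thesis's exact procedure:

```python
from scipy.stats import betabinom

# Prior information on the failure probability: p ~ Beta(a, b)
a, b = 2.0, 50.0          # illustrative prior (mean failure rate ~ 3.8%)

# Observed field data: x failures among n units
n, x = 400, 13

# Conjugate update: posterior is Beta(a + x, b + n - x).
a_post, b_post = a + x, b + n - x

# The future number of failures Y among m new units then follows a
# beta-binomial distribution (binomial with Beta-distributed p).
m = 200
y = betabinom(m, a_post, b_post)

# Central 90% prediction interval for the future failure count.
lo, hi = y.ppf(0.05), y.ppf(0.95)
print(f"90% prediction interval for failures in {m} units: [{lo:.0f}, {hi:.0f}]")
```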
39

L'apriorisme dans les termes de la science expérimentale / Apriorism in the terms of experimental science

Dufault, Wilfrid J. 06 March 2019 (has links)
40

Rôle des croyances a priori et de la contiguïté temporelle dans une tâche de raisonnement causal / The role of prior beliefs and temporal contiguity in a causal reasoning task

Walsh, Sébastien 19 April 2018 (has links)
To adapt to the world around them, humans must understand the cause-and-effect relations present in their environment. Even though researchers have recently tried to describe and model this ability, the role of the numerous factors involved in this kind of reasoning has yet to be defined more thoroughly. This thesis explores the interaction between two factors that are rarely studied together: the temporal contiguity between a cause and its effect, and a person's prior beliefs. Notably, it is commonly held that someone should perceive a stronger causal link between a cause and an effect when the time lag between the events matches their expectations; unfortunately, results in the literature fail to confirm such a claim systematically. This thesis asserts that the proposition can be empirically confirmed, provided a new methodology is used to test it. Participants were therefore asked to judge the strength of a potential causal link between the use of an insecticide and a change of colour in the leaves of palm trees, with information presented at the beginning of the experiment used to manipulate their expectations. The information to be evaluated was presented in summary tables, indicating how many trees, sprayed or not with the insecticide, underwent a colour change in the 5 days following spraying, and at what moment the change occurred for each affected tree. Experiment 1 shows that, when no expectations are suggested to participants, the perceived strength of the causal link between two events decreases as the time lag between them increases; however, this effect is counteracted when the presence of a delay is suggested beforehand. Experiment 2 tests the effect of the match between the suggested delay and the delay present in the data: the perceived causal strength is higher when the suggested and observed delays are similar, and lower when they differ. The theoretical and practical implications of these results are discussed in relation to a recent cognitive-architecture model of causal reasoning.
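As an illustration of how causal strength is commonly quantified from contingency data of this kind, the ΔP rule is a standard baseline in this literature; the counts below are invented, not the thesis's stimuli:

```python
# Contingency counts for cause (insecticide sprayed) and effect (colour change).
# Invented for illustration.
a = 18  # sprayed, changed colour
b = 2   # sprayed, no change
c = 5   # not sprayed, changed colour
d = 15  # not sprayed, no change

# Delta-P rule: P(effect | cause) - P(effect | no cause).
# Values near +1 suggest a strong generative causal link.
delta_p = a / (a + b) - c / (c + d)
print(f"deltaP = {delta_p:.2f}")   # 0.90 - 0.25 = 0.65
```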
