  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Essays on strategic voting and political influence

Vlaseros, Vasileios January 2014
Chapter 1: I undertake a detailed literature review of the passage from the probabilistic versions of the Condorcet Jury Theorem to models augmented by the concept of strategic agents, covering both theoretical and relevant empirical work. In the first part, I explore the most influential game-theoretic models and their main predictions. In the second part, I review what voting experiments have to say about these predictions, with a brief mention of the experiments' key methodological aspects. In the final part, I attempt to map the recent strategic voting literature in terms of structure and scope. I close with a philosophical question on the exogeneity of a "correct" choice of voting outcome, which is inherent in the current strategic voting literature. Chapter 2: I develop a two-stage game with individually costly political action and costless voting on a binary agenda where, in equilibrium, agents rationally cast honest votes in the voting stage. I show that a positive but sufficiently low individual cost of political action can lead to a loss in aggregate welfare for any electorate size. When the individual cost of political action is lower than the signalling gain, agents will engage in informative political action. In the voting stage, since everyone's signal is revealed, agents will unanimously vote for the same policy. Therefore, the result of the ballot will be exactly the same as without prior communication, but with the additional aggregate cost of political action. However, when agents have heterogeneous prior beliefs, society is large and the state of the world is sufficiently uncertain, a moderate individual cost of political action can induce informative collective action by only a subset of the members of society, which increases ex ante aggregate welfare relative to no political action. The size of the subset of agents engaging in collective action depends on the dispersion of prior opinions.
Chapter 3: This chapter shows theoretically that hearing expert opinions can be a double-edged sword for decision-making committees. We study a majoritarian voting game of common interest where committee members receive not only private information, but also expert information that is more accurate than the private information and is observed by all members. We identify three types of equilibria of interest: i) the symmetric mixed-strategy equilibrium, where each member randomizes between following the private and public signals should they disagree; ii) the asymmetric pure-strategy equilibrium, where a certain number of members always follow the public signal while the others always follow the private signal; and iii) a class of equilibria where a supermajority, and hence the committee decision, always follows the expert signal. We find that in the first two equilibria the expert signal is collectively taken into account in a way that enhances the efficiency (accuracy) of the committee decision, and a fortiori the Condorcet Jury Theorem (CJT) holds. However, in the third type of equilibria, private information is not reflected in the committee decision and the efficiency of the committee decision is identical to that of the public information alone, which may well be lower than the efficiency the committee could achieve without expert information. In other words, the introduction of expert information might reduce efficiency in equilibrium. Chapter 4: In this chapter we present experimental results on the theory of the previous chapter. In the laboratory, too many subjects voted according to the expert information compared with the predictions of the efficient equilibria. The majority decisions followed the expert signal most of the time, which is consistent with the class of obedient equilibria mentioned in the previous chapter. Another interesting finding is the marked heterogeneity in voting behaviour.
We argue that the voters' behaviour in our data is best described as that of an obedient equilibrium in which a supermajority (and hence the decision) always follows the expert signal, so that no voter is pivotal. A large efficiency loss arose from the presence of expert information when the committee size was large. We suggest that it may be desirable for expert information to be revealed only to a subset of committee members. Finally, in the Appendix we describe a new alternative method for producing the signal matrix of the game. Chapter 5: There is a significant gap between the theoretical predictions and the empirical evidence about the efficiency of policies in reducing crime rates. This chapter argues that one important reason is that the current economics-of-crime literature overlooks an important hysteresis effect in criminal behaviour. One important consequence of hysteresis is that positive and negative exogenous variations in the determining variables affect an outcome variable with different magnitudes. We present a simple model that characterises hysteresis at both the micro and macro levels. When the probability of punishment decreases, some law-abiding agents will find it more beneficial to enter a criminal career; if the probability of punishment returns to its original level, a subset of these agents will continue their career in crime. We show that, when the crime choice exhibits weak hysteresis at the individual level, the crime rate in a society consisting of a continuum of agents that follows any non-uniform distribution will exhibit strong hysteresis. Only when punishment is extremely severe does the hysteresis effect cease to exist. The theoretical predictions corroborate the argument that policy makers should be more inclined to set pre-emptive policies rather than mitigating measures.
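The sincere-voting benchmark behind the Condorcet Jury Theorem, which the chapters above build on, can be illustrated numerically. The sketch below is an editorial illustration, not code from the thesis: it computes the probability that a simple majority of n independent voters, each correct with probability p, reaches the correct decision.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, picks the correct alternative.
    n is assumed odd so that ties cannot occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

For p > 0.5 this probability grows with n (e.g. 0.6 for one voter, 0.648 for three), which is the jury theorem's aggregation effect that strategic-voting models then qualify.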
42

Le critère de démarcation de Karl R. Popper et son applicabilité / Karl R. Popper's criterion of demarcation and its applicability

Michel-Bechet, Jacques 13 May 2013
Karl Popper's (1902-1994) falsifiability both defines the standard of scientific knowledge and serves as the criterion of the empirical character of any scientific theory. The thesis brings out the ambiguity of an epistemology that is grounded in potential logic while claiming practical effectiveness. With such a criterion it is impossible to rule on the scientific status of disciplines as diverse as Marxism, psychoanalysis, the theory of evolution, and astrology, which Popper studied and excluded from the field of science for lack of predictability. The thesis also shows that, although very influential in biology, Popper's normative epistemology has never really been applied, even by followers such as Jacques Monod, and is not applicable. The reasons for these failures must be sought not only in potential logic, but also in the deductive-nomological model, which underlies the criterion and which became the norm for all empirical science. While the D-N model of explanation, later formalized by Carl Hempel, can serve to construct the modus operandi of falsification in physics, it cannot claim to be operative in disciplines where the existence of laws remains problematic and the notion of prediction is plural, as in biology. Finally, drawing on this critical analysis of Popperian epistemology, the thesis proposes a typology of predictions, specifies what is distinctive about biological statements, and considers another criterion of scientificity that takes greater account of science in action.
43

Geração de mapas de ambiente de rádio em sistemas de comunicações sem fio com incerteza de localização. / Generation of radio environment maps in wireless communications systems with location uncertainty.

Silva Junior, Ricardo Augusto da 17 December 2018
The generation and use of radio environment maps (REM) in wireless systems have been the subject of recent research. Among its possible applications, the REM provides important information for coverage prediction and optimization in wireless systems, since it is based on measurements collected directly from the network. In this context, REM generation depends on processing the measurements and their locations to construct the maps through spatial prediction. However, the location uncertainty of the collected measurements can significantly degrade the accuracy of the spatial predictions and, consequently, affect decisions based on the REM. This work addresses REM generation more realistically by formulating a spatial prediction model that introduces location errors into the radio environment of a wireless communication system. The investigation shows that the impact of location uncertainty on REM generation is significant, especially on the estimation techniques used to learn the parameters of the spatial prediction model. A spatial prediction technique based on geostatistical tools is therefore proposed to overcome the negative effects caused by the location uncertainty of the REM measurements. Computational simulations evaluate the performance of the main prediction techniques for REM generation under location uncertainty. The simulation results for the proposed technique are promising and show that taking the statistical distribution of the location errors into account can yield more accurate predictions for REM generation. The influence of different aspects of radio-environment modelling is also analysed, reinforcing the idea that learning the radio-environment parameters plays an important role in the accuracy of the spatial predictions that underpin reliable REM generation. Finally, an experimental study based on a measurement campaign generates the REM in practice and explores the performance of the learning and prediction algorithms developed in this work.
44

Extrapolação a partir de padrões seriais de estímulos é prejudicada por danos no tálamo anteroventral, em ratos / Extrapolation of serial stimulus patterns is disrupted following selective damage to the anteroventral thalamus in rats

Silva, Daniel Giura da 05 June 2017
According to Gray (1982), the brain continuously monitors the environment and behaviour and can inhibit ongoing behaviour when it encounters novelty, or discrepancies between expectations generated from memories of past regularities and current sensory information, in order to explore the source of the novelty or discrepancy and thus gather information for better predictions in the future. The septo-hippocampal system would compare present stimuli with anticipated (predicted) information. The comparator would be the subiculum, which receives present information through neocortical afferents, via the entorhinal cortex, and predicted information generated by a "prediction-generating circuit". Gray (1982) proposed that this circuit comprises the subiculum, the mammillary bodies, the anteroventral thalamus, the cingulate cortex and, again, the subiculum. Of these structures, the anteroventral thalamus is, hodologically and experimentally, in a privileged position for investigating this postulated circuit. The present study investigated the effect of selective damage to the anteroventral thalamus, produced by topical application of N-methyl-D-aspartic acid (NMDA), on the ability of rats to extrapolate from serial stimulus patterns; control subjects received phosphate buffer. Male Wistar rats were trained to run down a straight alleyway for a reward at its end. In each session (one session per day) the animals ran four successive trials, one immediately after the other, receiving different amounts of sunflower seeds in each trial: subjects exposed to the monotonic decreasing pattern received 14, 7, 3 and 1 seeds, while subjects exposed to the non-monotonic pattern received 14, 3, 7 and 1 seeds. The animals were trained for 31 sessions. On the 32nd day, a fifth trial, never before experienced by the animals, was added to the session. As expected, the fifth-trial running times of control animals exposed to the monotonic decreasing pattern were substantially longer than those of control animals exposed to the non-monotonic pattern, indicating extrapolation. In contrast, lesioned subjects exposed to the monotonic pattern did not show this increase in latency on the fifth run, indicating that they did not extrapolate. In conclusion, the results indicate that extrapolation from serial stimulus patterns is impaired by selective lesion of the anteroventral thalamus.
45

Visual uncertainty in serial dependence : facing noise / Visuell osäkerhet vid seriellt beroende : effekt av brus

Lidström, Anette January 2019
Empirical evidence suggests that the visual system uses prior visual information to predict the future state of the world. This is believed to occur through an information-integration mechanism known as serial dependence: current perceptions are influenced by prior visual information in order to create perceptual continuity in an ever-changing, noisy environment. Serial dependence has been found to occur both for low-level stimulus features (e.g., numerosity, orientation) and for high-level stimuli such as faces. Recent evidence indicates that serial dependence for low-level stimuli is affected by the reliability of the current stimulus: when current stimuli are low in reliability, the perceptual influence of previously viewed stimuli is stronger. However, it is not clear whether stimulus reliability also affects serial dependence for high-level stimuli such as faces. Faces are highly complex stimuli that are processed differently from other objects, and face perception is suggested to be especially vulnerable to external visual noise. Here, I used regular and visually degraded face stimuli to investigate whether serial dependence for faces is affected by stimulus reliability. The results showed that previously viewed degraded faces did not have a very strong influence on perception of currently viewed regular faces. In contrast, when currently viewed faces were degraded, the perceptual influence of previously viewed regular faces was rather strong. Surprisingly, there was a quite strong perceptual influence of previously viewed faces on currently viewed faces when both were degraded. This could mean that the effect of stimulus reliability in serial dependence for faces is not due to impaired encoding, but rather reflects a perceptual choice.
46

A multiple-perspective approach for insider-threat risk prediction in cyber-security

Elmrabit, Nebrase January 2018
Governments and research communities are currently concentrating on insider threats more than ever, mainly because the impact of a malicious insider is greater than before. Moreover, with the use of the dark web, leaking and selling mass data has become easier, and malicious insiders can leak confidential data while remaining anonymous. Our approach describes the information gained by examining insider security threats from a multiple-perspective concept based on an integrated three-dimensional approach: the human issue, the technology factor, and the organisational aspect, which together form one risk-prediction solution. In the first part of this thesis, we give an overview of the basic characteristics of insider cyber-security threats. We also consider current approaches and controls for mitigating such threats, broadly classifying them into two categories: a) technical mitigation approaches, and b) non-technical mitigation approaches. We review case studies of insider crimes to understand how authorised users could harm their organisations, dividing the cases into seven groups based on insider-threat categories: a) insider IT sabotage, b) insider IT fraud, c) insider theft of intellectual property, d) insider social engineering, e) unintentional insider-threat incidents, f) insiders in cloud computing, and g) insiders in national security. In the second part of this thesis, we present a novel approach to predicting malicious insider threats before a breach takes place. A prediction model was first developed from the research literature, which highlighted the main prediction factors and the insider indicator variables. Bayesian network statistical methods were then used to implement and test the proposed model on dummy data, and a survey was conducted to collect real data from a single organisation. A risk level and prediction for each authorised user within the organisation were then analysed and measured. A dynamic Bayesian network model is also proposed in this thesis to predict insider threats over a period of time, based on data collected and analysed on different time scales, by adding time-series factors to the previous model. A verification test comparing the model's output on 61 cases from the education sector shows good consistency: the correlation was generally around R-squared = 0.87, which indicates an acceptable fit in this area of research. We expect the approach to be a useful tool for security experts: it provides organisations with an insider-threat risk assessment for each authorised user, and organisations can also discover the areas of weakness that need attention in dealing with insider threats. Moreover, we expect the model to be useful to the research community as a basis for understanding and future research.
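The verification step above reports a fit of about R-squared = 0.87 between model output and observed cases. As an editorial sketch of that statistic (with made-up numbers, not the 61-case dataset), the coefficient of determination can be computed as:

```python
def r_squared(observed, predicted):
    """Coefficient of determination between observed outcomes and
    model predictions (1.0 means a perfect fit)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

A value near 0.87 means the residual error is small relative to the total variance of the observed risk levels, which is the sense in which the thesis calls the fit acceptable.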
47

Statistical Analysis of Fastener Vibration Life Tests

Cheatham, Christopher 01 November 2007
This thesis presents methods to statistically quantify data from fastener vibration life tests, using data from vibration life tests of threaded inserts with secondary locking features. Threaded inserts in three configurations are examined: no locking feature, a prevailing-torque locking feature, and an adhesive locking feature. Useful composite plots were developed by extracting the minimum preload versus cycles from the test data; the minimum preload was extracted because the raw traces overlap and because the minimum preload is of most interest in such tests. In addition to the composite plots, descriptive statistics of the samples were determined, including the mean, median, quartiles, and extents. These statistics were plotted to illustrate variability within a sample as well as between samples. The plots also show that characteristics of loosening in a sample, such as preload loss and the rate of preload loss, are preserved when such tests are summarized. Fastener vibration life tests are usually presented and compared using a single test sample, which is why statistically quantifying them is needed and important. Methods to predict the sample population have been created as well. To predict populations, tests to determine the distribution of the sample, such as probability plots and the probability plot correlation coefficient, were conducted. Once samples were determined to be normal, confidence intervals were created for the test samples, which provide a range within which the population mean should lie. It has been shown that characteristics of loosening are preserved in the confidence intervals. Populations of fastener vibration life tests have not previously been presented or created. In the past, loosening in fastener vibration life tests has been evaluated with plots of a single test sample; in this work, statistically quantified results of multiple tests were used instead. This is important because evaluating loosening with more than one test sample can reveal variation between tests. Secondary locking features were found to help reduce the loss of preload; the prevailing-torque locking feature becomes more effective as preload is lost, and the adhesive locking feature was found to be the best.
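The confidence-interval step described above can be sketched as follows. This is an editorial illustration with made-up preload readings, not data from the thesis; the Student-t critical value is supplied by the caller for the chosen confidence level.

```python
from statistics import mean, stdev

def t_confidence_interval(sample, t_crit):
    """Two-sided confidence interval for the population mean of a
    normally distributed sample. t_crit is the Student-t critical
    value for the chosen confidence level and n-1 degrees of freedom
    (about 2.262 for 95% confidence with n = 10)."""
    n = len(sample)
    m = mean(sample)
    half_width = t_crit * stdev(sample) / n**0.5
    return m - half_width, m + half_width
```

For ten hypothetical minimum-preload readings centred on 10.0 kN, the interval brackets the sample mean; its width shrinks as more tests are added, which is why multiple samples give a tighter population estimate than a single test.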
48

Generative Adversarial Networks for Lupus Diagnostics

Pradeep Periasamy (7242737) 16 October 2019
The recent boom of machine-learning network architectures such as Generative Adversarial Networks (GAN), Deep Convolutional GANs (DCGAN), Self-Attention GANs (SAGAN) and Context-Conditional GANs (CCGAN), together with the development of high-performance computing for big-data analysis, has the potential to be highly beneficial in many domains, fittingly in the early detection of chronic diseases. The clinical heterogeneity of one such chronic autoimmune disease, Systemic Lupus Erythematosus (SLE), also known as Lupus, makes medical diagnosis difficult. One major concern is the limited dataset available for diagnostics. In this research, we demonstrate the application of Generative Adversarial Networks for data augmentation and for improving the error rates of convolutional neural networks (CNN). A limited Lupus dataset of 30 typical 'butterfly rash' images is used to decrease the error rates of a widely accepted CNN architecture, LeNet; for this dataset, a 73.22% decrease in LeNet's error rate is observed. Such an approach can therefore be extended to more recent neural-network classifiers such as ResNet. Additionally, a human perceptual study with 45 Amazon MTurk participants, identified as 'healthcare professionals' on the platform, reveals that artificial images generated by CCGAN are preferred as most closely resembling real Lupus images over those generated by SAGAN and DCGAN. This research aims to help reduce the time to detection and treatment of Lupus, which usually takes 6 to 9 months from onset.
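The headline figure above, a 73.22% decrease in error rate, is a relative reduction with respect to the unaugmented baseline. As a hedged illustration of that arithmetic (the error rates below are hypothetical, not the thesis's numbers):

```python
def relative_error_reduction(baseline_error, augmented_error):
    """Relative decrease in classification error after augmentation,
    expressed as a percentage of the baseline error rate."""
    return 100.0 * (baseline_error - augmented_error) / baseline_error
```

Halving a hypothetical baseline error of 0.50 to 0.25 is a 50% relative reduction; the thesis's 73.22% figure is computed the same way from LeNet's error rates before and after GAN-based augmentation.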
49

Measurements Versus Predictions for the Static and Dynamic Characteristics of a Four-pad Rocker-pivot, Tilting-pad Journal Bearing

Tschoepe, David 1987- 14 March 2013
Measured and predicted static and dynamic characteristics are provided for a four-pad, rocker-pivot, tilting-pad journal bearing in the load-on-pad and load-between-pad orientations. The bearing has the following characteristics: 4 pads, 0.57 pad pivot offset, 0.6 L/D ratio, 60.33 mm (2.375 in) pad axial length, 0.08255 mm (0.00325 in) radial clearance in the load-on-pad orientation, and 0.1189 mm (0.00468 in) radial clearance in the load-between-pad orientation. Tests were conducted on a floating test-bearing design with unit loads ranging from 0 to 2903 kPa (421.1 psi) and speeds from 6.8 to 13.2 krpm. For all rotor speeds, hot-clearance measurements were taken to show the reduction in bearing clearance due to thermal expansion of the shaft and pads during testing: as the testing conditions get hotter, the rotor, pads, and bearing expand, decreasing the radial bearing clearance. Hot-clearance measurements showed a 16-25% decrease in clearance compared with a measurement at room temperature. For all test conditions, dynamic tests were performed over a range of excitation frequencies to obtain complex dynamic stiffness coefficients as a function of frequency. The direct real dynamic stiffness coefficients were then fitted with a quadratic function of frequency. From the curve fit, the frequency dependence was captured by including a virtual-mass matrix [M] to produce a frequency-independent [K][C][M] model. The direct dynamic stiffness coefficients for the load-on-pad orientation showed significant orthotropy, while the load-between-pad orientation showed only slight orthotropy as load increased. Experimental cross-coupled stiffness coefficients were measured in both load orientations, but were of the same sign as, and significantly smaller than, the direct stiffness coefficients.
In both orientations the imaginary part of the measured dynamic stiffness increased linearly with frequency, allowing frequency-independent direct damping coefficients. The rotordynamic coefficients presented were compared with predictions from two different Reynolds-based models. Both models showed the importance of accounting for pivot flexibility and for the different pad geometries (due to the reduction in bearing clearance during testing) when predicting rotordynamic coefficients; if either of these two inputs was incorrect, predictions of the bearing's impedance coefficients were very inaccurate. The main difference between the prediction codes is that one of them incorporates pad flexibility when predicting the impedance coefficients of a tilting-pad journal bearing. To examine the effect of pad flexibility on the predicted impedance coefficients, a series of predictions was created by changing the magnitude of the pad's bending stiffness. Increasing the bending stiffness used in the predictions by a factor of 10 typically caused a 3-11% increase in the predicted Kxx and Kyy, and a 10-24% increase in the predicted Cxx and Cyy. In all cases, increasing the calculated bending stiffness from ten to a hundred times the calculated value caused little if any change in Kxx, Kyy, Cxx, and Cyy. For a flexible pad, an increase in bending stiffness can have a large effect on predictions; for a more rigid pad, the effect is much smaller. The results show that the pad's structural bending stiffness can be an important factor in predicting impedance coefficients: even though the pads tested in this thesis are extremely stiff, changes are still seen in the predictions when the magnitude of the pad's bending stiffness is increased, especially in Cxx and Cyy. The code without pad flexibility predicted Kxx and Kyy much more accurately than the code with pad flexibility, while the code with pad flexibility predicted Cxx more accurately and the code without it predicted Cyy more accurately. Regardless of the prediction code used, Kxx and Kyy were over-predicted at low loads but predicted more accurately as load increased. Cxx and Cyy were modelled very well in the load-on-pad orientation, while slightly over-predicted in the load-between-pad orientation. For solid pads, like the ones tested here, both codes do a decent job of predicting the impedance coefficients.
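The curve-fit step described above amounts to Re(H) = K - M*omega^2 for the real part and Im(H) = C*omega for the imaginary part of the complex dynamic stiffness. The sketch below is an editorial illustration on noise-free synthetic data, not the thesis's identification code.

```python
import numpy as np

def fit_kcm(omega, H):
    """Fit a frequency-independent [K][C][M] model to complex dynamic
    stiffness data H(omega):  Re(H) = K - M*omega**2,  Im(H) = C*omega."""
    # Linear fit of the real part against omega**2: slope = -M, intercept = K
    slope, K = np.polyfit(omega**2, H.real, 1)
    # Zero-intercept least-squares fit of the imaginary part: Im(H) = C*omega
    C = np.dot(H.imag, omega) / np.dot(omega, omega)
    return K, C, -slope
```

On synthetic data built from known K, C, and M the fit recovers the coefficients exactly; on measured data the quality of the quadratic fit to Re(H) is what justifies the frequency-independent model.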
50

Navigating in a dynamic world : Predicting the movements of others

Thorarinsson, Johann Sigurdur January 2009
The human brain is always trying to predict ahead in time, and it has been argued that actions can be taken based solely on the brain's internal simulations. A recent trend in the field of Artificial Intelligence is to provide agents with an “inner world”, or internal simulation. This inner world can then be used instead of the real world, making it possible to operate without any input from the real world. This final-year project explores the possibility of navigating collision-free in a dynamic environment using only an internal simulation of sensor input instead of real input. Three scenarios are presented that show how internal simulation operates in a dynamic environment. The results show that it is possible to navigate entirely based on predictions, without a collision.
