  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Spectral Selective Photothermal Materials and Energy Applications

Lin, Jou January 2022 (has links)
No description available.
282

Does salted food smell more than unsalted? : A sensory investigation into whether added salt increases the experience of aroma in food.

Öbrink, Lovisa January 2023 (has links)
No description available.
283

Does Response Modality Influence Conflict? Modelling Vocal and Manual Response Stroop Interference

Fennell, Alex 16 June 2017 (has links)
No description available.
284

Effects of Signal Modality and Event Asynchrony on Vigilance Performance and Cerebral Hemovelocity

Shaw, Tyler H. 02 October 2006 (has links)
No description available.
285

Modality dominance in young children: the underlying mechanisms and broader implications

Napolitano, Amanda C. 15 November 2006 (has links)
No description available.
286

Audiovisual Cross-Modality in Virtual Reality

Sandberg Bröms, Samuel, Hansen, Emil January 2022 (has links)
What happens when we see an object of a certain material, but the sounds it makes come from another material? While this is an interesting question, the area is under-researched. In the previous research that does exist, the visuals have been represented using textures on simple shapes such as cubes or spheres. Since this is not how humans experience materials in the real world, there is a possibility that the existing research is not generalizable or ecologically valid. We wanted to see what would happen if this type of test was performed using 3D models that looked like real-life objects that most people would be familiar with. To test this, we gathered impact sounds and 3D models representing nine different materials and created a virtual-reality program that allowed us to test all possible combinations of sounds and visuals. These tests were performed with 15 participants, who selected which material they believed each audiovisual combination represented. Our results showed a higher tendency to rely on audio cues for material perception compared to previous tests. This is interesting because we increased the visual fidelity while the quality of the audio was comparable to the previous tests. One theory is that the increase in visual fidelity makes the visuals so much clearer that participants shifted their focus to trying to understand the audio.
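The full cross-pairing described in the abstract above (every material's impact sound paired with every material's visual model) can be sketched as follows. The material labels here are hypothetical; the thesis uses nine materials but does not list them in this abstract, so this snippet only illustrates the 9 × 9 trial design.

```python
# Sketch of the full audiovisual cross-pairing: every impact sound
# paired with every 3D model, congruent and incongruent alike.
from itertools import product

# Hypothetical material labels -- the abstract only says "nine materials".
materials = ["wood", "metal", "glass", "plastic", "ceramic",
             "stone", "rubber", "paper", "fabric"]

# Each trial is a (visual material, audio material) pair.
trials = [(visual, audio) for visual, audio in product(materials, materials)]
print(len(trials))  # → 81
```

Presenting all 81 pairs to each participant lets congruent trials (matching sound and visual) serve as a baseline against which the incongruent trials reveal which modality dominates material perception.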
287

Development of a Silicon Photomultiplier Based Gamma Camera

Tao, Ashley T. 04 1900 (has links)
Dual modality imaging systems such as SPECT/CT have become commonplace in medical imaging, as they aid in diagnosing diseases by combining anatomical images with functional images. We are interested in developing a dual modality imaging system combining SPECT and MR imaging, because MR does not require any ionizing radiation to image anatomical structures and is known to have superior soft tissue contrast to CT. However, one of the fundamental challenges in developing a SPECT/MR system is that traditional gamma cameras with photomultiplier tubes are not compatible with magnetic fields. New developments in solid state detectors have led to the silicon photomultiplier (SiPM), which is insensitive to magnetic fields.
We have developed a small-area gamma camera with a tileable 4x4 array of SiPM pixels coupled to a CsI(Tl) scintillation crystal. A number of gamma camera geometries were simulated using both pixelated and monolithic scintillation crystals. Several event positioning algorithms were also investigated as alternatives to conventional Anger logic positioning. Simulations have shown that we can adequately resolve intrinsic spatial resolution down to 1mm, even in the presence of noise. Based on the results of these simulations, we have built a prototype SiPM system comprised of 16 detection channels coupled to discrete crystals. A charge sensitive preamplifier, a pulse height detection circuit, and a digital acquisition system make up the pulse processing components of our gamma camera system. With this system, we can adequately distinguish each crystal element in the array and have obtained an energy resolution of 30±1 (FWHM) with Tc-99m (140keV). In the presence of a magnetic field, we have seen no spatial distortion of the resultant image and have obtained an energy resolution of 31±3. / Master of Science (MSc)
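The conventional Anger logic positioning mentioned in the abstract above estimates where a gamma ray interacted by taking the signal-weighted centroid of the pixel array. A minimal sketch, assuming a 4x4 array with a hypothetical 1 mm pixel pitch (the coordinates and event data here are illustrative, not the thesis's actual detector values):

```python
# Minimal sketch of Anger-logic event positioning on a 4x4 SiPM array.
import numpy as np

def anger_position(signals, pitch_mm=1.0):
    """Estimate the (x, y) interaction position as the signal-weighted
    centroid of the pixel signals (classic Anger logic)."""
    n = signals.shape[0]  # assume a square n x n array
    # Pixel-center coordinates, centered on the array.
    coords = (np.arange(n) - (n - 1) / 2) * pitch_mm
    total = signals.sum()
    x = (signals.sum(axis=0) * coords).sum() / total  # weight column sums
    y = (signals.sum(axis=1) * coords).sum() / total  # weight row sums
    return float(x), float(y)

# Example: scintillation light shared equally by the four central pixels
# gives a centroid at the array center.
event = np.zeros((4, 4))
event[1:3, 1:3] = 1.0
print(anger_position(event))  # → (0.0, 0.0)
```

The centroid estimate degrades near the crystal edges, which is one reason alternative positioning algorithms (such as the ones the thesis investigates) are of interest.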
288

Modal and Pentatonic Motives in the Music of HIM

Tinajero Perez, Andrea 16 September 2022 (has links)
No description available.
289

The modality shift effect and the effectiveness of warning signals in different modalities

Rodway, Paul January 2005 (has links)
Which is better, a visual or an auditory warning signal? Initial findings suggested that an auditory signal was more effective, speeding reaction to a target more than a visual warning signal, particularly at brief foreperiods [Bertelson, P., & Tisseyre, F. (1969). The time-course of preparation: confirmatory results with visual and auditory warning signals. Acta Psychologica, 30. In W.G. Koster (Ed.), Attention and Performance II (pp. 145-154); Davis, R., & Green, F. A. (1969). Intersensory differences in the effect of warning signals on reaction time. Acta Psychologica, 30. In W.G. Koster (Ed.), Attention and Performance II (pp. 155-167)]. This led to the hypothesis that an auditory warning signal is more alerting than a visual one [Sanders, A. F. (1975). The foreperiod effect revisited. Quarterly Journal of Experimental Psychology, 27, 591-598; Posner, M. I., Nissen, M. J., & Klein, R. M. (1976). Visual dominance: an information-processing account of its origins and significance. Psychological Review, 83, 157-171]. Recently, [Turatto, M., Benso, F., Galfano, G., & Umilta, C. (2002). Nonspatial attentional shifts between audition and vision. Journal of Experimental Psychology: Human Perception and Performance, 28, 628-639] found no evidence for an auditory warning signal advantage and showed that at brief foreperiods a signal in the same modality as the target facilitated responding more than a signal in a different modality. They accounted for this result in terms of the modality shift effect, with the signal exogenously recruiting attention to its modality and thereby facilitating responding to targets arriving in the modality to which attention had been recruited. The present study conducted six experiments to understand the cause of these conflicting findings. The results suggest that an auditory warning signal is not more effective than a visual warning signal.
Previous reports of an auditory superiority appear to have been caused by using different locations for the visual warning signal and visual target, resulting in the target arriving at an unattended location when the foreperiod was brief. Turatto et al.'s results were replicated with a modality shift effect at brief foreperiods. However, it is also suggested that previous measures of the modality shift effect may still have been confounded by a location cuing effect.
290

M3D: Multimodal MultiDocument Fine-Grained Inconsistency Detection

Tang, Chia-Wei 10 June 2024 (has links)
Validating claims from misinformation is a highly challenging task that involves understanding how each factual assertion within the claim relates to a set of trusted source materials. Existing approaches often make coarse-grained predictions but fail to identify the specific aspects of the claim that are troublesome and the specific evidence relied upon. In this paper, we introduce a method and new benchmark for this challenging task. Our method predicts the fine-grained logical relationship of each aspect of the claim from a set of multimodal documents, which include text, image(s), video(s), and audio(s). We also introduce a new benchmark (M^3DC) of claims requiring multimodal multidocument reasoning, which we construct using a novel claim synthesis technique. Experiments show that our approach significantly outperforms state-of-the-art baselines on this challenging task on two benchmarks while providing finer-grained predictions, explanations, and evidence. / Master of Science / In today's world, we are constantly bombarded with information from various sources, making it difficult to distinguish between what is true and what is false. Validating claims and determining their truthfulness is an essential task that helps us separate facts from fiction, but it can be a time-consuming and challenging process. Current methods often fail to pinpoint the specific parts of a claim that are problematic and the evidence used to support or refute them. In this study, we present a new method and benchmark for fact-checking claims using multiple types of information sources, including text, images, videos, and audio. Our approach analyzes each aspect of a claim and predicts how it logically relates to the available evidence from these diverse sources. This allows us to provide more detailed and accurate assessments of the claim's validity. 
We also introduce a new benchmark dataset called M^3DC, which consists of claims that require reasoning across multiple sources and types of information. To create this dataset, we developed a novel technique for synthesizing claims that mimic real-world scenarios. Our experiments show that our method significantly outperforms existing state-of-the-art approaches on two benchmarks while providing more fine-grained predictions, explanations, and evidence. This research contributes to the ongoing effort to combat misinformation and fake news by providing a more comprehensive and effective approach to fact-checking claims.
