341

Things That Make You Go “Hmmm”: Effects of Gender Measurement Format on Positive/Negative Mood

Ferguson, Claire E. 28 January 2021 (has links)
No description available.
342

An evaluation of U-Net’s multi-label segmentation performance on PDF documents in a medical context / En utvärdering av U-Nets flerklassiga segmenteringsprestanda på PDF-dokument i ett medicinskt sammanhang

Sebek, Fredrik January 2021 (has links)
The Portable Document Format (PDF) is an ideal format for viewing and printing documents, and today many companies store their documents in PDF format. However, converting a PDF document to any other structured format is inherently difficult. As a result, much of the information contained in a PDF document is not directly accessible, which is problematic: manual intervention is required to convert a PDF into another file format accurately, and this work is both strenuous and exhausting. An automated solution to this process could greatly improve access to information in many companies. A significant body of literature has investigated extracting information from PDF documents in a structured way, and in recent years these methodologies have come to depend heavily on computer vision. The work in this thesis evaluates how the U-Net model handles multi-label segmentation of PDF documents in a medical context, extending Stahl et al.'s 2018 work. It also compares two newer extensions of the U-Net model, MultiResUNet (2019) and SS-U-Net (2021), and assesses how each model performs in a data-sparse environment. The three models were implemented, trained, and evaluated, with performance measured using the Dice coefficient, the Jaccard coefficient, and percentage similarity; visual inspection was also used to analyze how the models performed from a perceptual standpoint. The results indicate that both U-Net and SS-U-Net segment PDF documents effectively in a data-abundant environment, while SS-U-Net outperformed both U-Net and MultiResUNet in the data-sparse environment. MultiResUNet significantly underperformed both U-Net and SS-U-Net in both environments. The strong results achieved by U-Net and SS-U-Net suggest that they can be incorporated into a larger system for accurate, structured extraction of information from PDF documents.
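
The overlap metrics named in the abstract are standard; as a rough illustration (not the thesis's actual code), per-class Dice, Jaccard, and a pixel-level percentage similarity for integer label maps might be computed as below, assuming NumPy arrays and reading "percentage similarity" as pixel-wise agreement:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Jaccard index |A∩B| / |A∪B| for two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def mean_class_score(pred_labels, target_labels, n_classes, metric=dice):
    """Average a binary metric over all classes of (H, W) integer label maps."""
    return float(np.mean([metric(pred_labels == c, target_labels == c)
                          for c in range(n_classes)]))

def percentage_similarity(pred_labels, target_labels):
    """Fraction of pixels with identical labels (one plausible reading)."""
    return float((pred_labels == target_labels).mean())
```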
343

Multiple intelligences theory in English language teaching: An analysis of current textbooks, materials and teachers’ perceptions

Botelho, Maria Do Rozário de Lima January 2003 (has links)
No description available.
344

Sing the Body Electric

Takacs, Stephen R. 24 August 2012 (has links)
No description available.
345

Fighting Unstructured Data with Formatting Methods : Navigating Crisis Communication: The Role of CAP in Effective Information Dissemination / Bekämpar ostrukturerad data med formateringsmetoder : Att navigera i kriskommunikation: CAP:s roll i effektiv informationsspridning

Spridzans, Alfreds January 2024 (has links)
This study investigates the format of crisis communication by analysing a news-archive dataset from Krisinformation.se, a Swedish website dedicated to sharing information about crises. The primary goal is to assess the dataset's structure and its efficacy in meeting the criteria of the Common Alerting Protocol (CAP), an internationally recognised format for emergency alerts. The study uses quantitative text analysis and data-preprocessing tools such as Python and Power Query to identify inconsistencies in the current dataset format; these anomalies limit the dataset's usefulness for extensive research and effective crisis communication. To address these issues, the study constructs two new datasets with enhanced column structures that rectify the identified problems. The refined datasets aim to improve the clarity and accessibility of information about crisis events, providing insight into the nature and frequency of these incidents. The research also offers practical recommendations for optimising the dataset format to align better with CAP standards, enhancing the overall effectiveness of crisis communication on the platform. The findings highlight the critical role of structured, standardised data formats in crisis communication, particularly in the context of increasing climate-related hazards and other emergencies. By improving the dataset format, the study contributes to more efficient data analysis and better preparedness for future crises; its insights are intended to help other analysts and researchers conduct more robust studies, ultimately aiding the development of more resilient and responsive crisis-communication strategies.
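
CAP 1.2 defines alert elements such as identifier, sender, sent, msgType, and scope, with event details carried in an info block. A hedged sketch of the kind of column mapping the study describes might look as follows; the source field names (id, published, headline, body, area) are hypothetical and not the actual Krisinformation.se schema:

```python
from datetime import datetime, timezone

def to_cap_row(record: dict) -> dict:
    """Map one news-archive record onto flattened CAP 1.2-style alert fields.
    The input keys are illustrative; a real export would need its own mapping."""
    return {
        "identifier": record.get("id"),
        "sender": "https://www.krisinformation.se",  # assumed sender URI
        "sent": datetime.fromisoformat(record["published"])
                        .astimezone(timezone.utc).isoformat(),
        "msgType": "Alert",
        "scope": "Public",
        "info_event": record.get("headline"),
        "info_description": record.get("body"),
        "info_areaDesc": record.get("area", "Sweden"),
    }

# Example with a made-up record:
row = to_cap_row({"id": "demo-1", "published": "2024-05-01T10:00:00+02:00",
                  "headline": "Flood warning", "body": "...", "area": "Värmland"})
```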
346

Разработка методических рекомендаций по подготовке цифровой информационной модели к проверке в ГАУ СО «Управление государственной экспертизы» : магистерская диссертация / Development of methodological recommendations for the preparation of a digital information model for verification in the State Autonomous Institution of the Sverdlovsk Region «Office of State Expertise»

Вавилов, И. Е., Vavilov, I. E. January 2024 (has links)
The author analyzed the requirements of the State Autonomous Institution of the Sverdlovsk Region «Office of State Expertise» for three-dimensional models of architectural, space-planning, and structural solutions that must be met to pass expert review when information modeling technologies are used. A model of a residential building was created for the architectural, space-planning, and structural sections, and the approach to populating the model with information is described. Methodological recommendations were developed for preparing a digital information model for verification by the State Autonomous Institution of the Sverdlovsk Region «Office of State Expertise».
347

Développement d'un alphabet structural intégrant la flexibilité des structures protéiques / Development of a structural alphabet integrating the flexibility of protein structures

Sekhi, Ikram 29 January 2018 (has links)
The purpose of this PhD is to provide a Structural Alphabet (SA) for more accurate characterization of protein three-dimensional (3D) structures, integrating the growing amount of protein 3D structure information now available in the Protein Data Bank (PDB). The SA also takes into account the logic behind the sequence of structural fragments by using a hidden Markov model (HMM). We describe a new structural alphabet called SAFlex (Structural Alphabet Flexibility), improving on the existing HMM-SA27 structural alphabet, designed to handle the uncertainty of the data (missing data in PDB files) and the redundancy of protein structures. The new SAFlex structural alphabet therefore offers a rigorous and robust encoding model. This encoding accounts for uncertainty by providing three encoding options: the maximum a posteriori (MAP), the marginal posterior distribution (POST), and the effective number of letters at each given position (NEFF). SAFlex also builds a consensus encoding from different replicates (multiple chains, monomers, and homomers) of a single protein, allowing the detection of structural variability between chains. These methodological advances and the SAFlex alphabet itself are the main contributions of this PhD. We also present a new PDB parser (SAFlex-PDB) and demonstrate that it is of interest in both qualitative terms (detection of various errors) and quantitative terms (speed and parallelization) by comparing it with two parsers well known in bioinformatics (Biopython and BioJava). The SAFlex structural alphabet is made available to the scientific community through a website, which represents the concrete contribution of this PhD, while the SAFlex-PDB parser is an important contribution to the proper functioning of that website. Given a protein tertiary structure in PDB format, SAFlex can be used in several ways: to encode the 3D structure and to identify and predict missing data; to date it is the only alphabet able to encode and predict missing data in a 3D protein structure. Finally, these improvements are promising for exploring the growing redundancy of protein data and obtaining useful quantifications of protein flexibility.
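
The abstract does not spell out the NEFF definition; a common convention for an "effective number" of symbols is the exponential of the Shannon entropy of the posterior distribution at a position, and MAP is simply the most probable letter. A minimal sketch under that assumption:

```python
import numpy as np

def neff(posterior: np.ndarray) -> float:
    """Effective number of letters at one position, computed as
    exp(Shannon entropy) of the posterior over the 27 structural letters.
    Equals 1.0 when one letter has all the mass, 27 when uniform."""
    p = posterior[posterior > 0]          # ignore zero-probability letters
    return float(np.exp(-(p * np.log(p)).sum()))

def map_encoding(posteriors: np.ndarray) -> np.ndarray:
    """MAP encoding: index of the most probable letter at each position.
    posteriors: (L, 27) array of per-position letter probabilities."""
    return posteriors.argmax(axis=1)
```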
348

Převod trojúhelníkových polygonálních 3D sítí na 3D spline plochy / 3D Triangles Polygonal Mesh Conversion on 3D Spline Surfaces

Jahn, Zdeněk Unknown Date (has links)
In computer graphics we often work with unstructured triangular 3D meshes, whose irregularity makes them poorly suited to further processing. In such situations the mesh needs to be converted to a more suitable representation. A 3D spline surface is a fitting alternative because it imposes regularity in the form of a control-point grid, making it better suited to subsequent processing. In the conversion described in this thesis, a quadrilateral 3D mesh is constructed first. This mesh has a regular structure, and crucially that structure corresponds to the control-point grid of the resulting 3D spline surface. The resulting quadrilateral mesh can be saved and subsequently used in modeling applications to create a T-spline surface.
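
The abstract does not give the surface formulation, but for a uniform bicubic B-spline (a typical choice over a regular control-point grid) a single patch is evaluated as a tensor product over a 4x4 window of control points. An illustrative sketch, not the thesis's actual method:

```python
import numpy as np

def cubic_bspline_basis(t: float) -> np.ndarray:
    """Uniform cubic B-spline basis values at parameter t in [0, 1]."""
    return np.array([
        (1 - t) ** 3,
        3 * t**3 - 6 * t**2 + 4,
        -3 * t**3 + 3 * t**2 + 3 * t + 1,
        t**3,
    ]) / 6.0

def patch_point(ctrl: np.ndarray, u: float, v: float) -> np.ndarray:
    """Evaluate one bicubic B-spline patch at (u, v).
    ctrl: (4, 4, 3) array -- a 4x4 window of the control-point grid."""
    bu, bv = cubic_bspline_basis(u), cubic_bspline_basis(v)
    return np.einsum("i,ijk,j->k", bu, ctrl, bv)

# Example: evaluate the center of a patch over a flat 4x4 grid.
grid = np.dstack(np.meshgrid(np.arange(4.0), np.arange(4.0)) + [np.zeros((4, 4))])
print(patch_point(grid, 0.5, 0.5))
```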
349

The theme of protest and its expression in S. F. Motlhake's poetry

Tsambo, T. L. (Theriso Louisa) 06 1900 (has links)
In apartheid South Africa, repression and the intensification of the Black struggle for political emancipation prompted artists to challenge the system through their music, oral poetry, and writing. Most produced works of protest in English to reach a wider audience, which led to the general misconception that literatures in the indigenous languages of South Africa were insensitive to the issues of those times. This study seeks firstly to put such a misconception to rest by proving that there is Commitment in these literatures, as exemplified in the poetry of S.F. Motlhake. Motlhake not only expresses protest against the political system of the time but also questions some religious and socio-cultural practices and institutions among his people. The study also examines his selected works as genuine poetry, which does not sacrifice art on the altar of propaganda. / African Languages / M.A. (African Languages)
350

Information triage : dual-process theory in credibility judgments of web-based resources

Aumer-Ryan, Paul R. 29 September 2010 (has links)
This dissertation describes the credibility judgment process using social psychological theories of dual-processing, which state that information processing outcomes are the result of an interaction “between a fast, associative information-processing mode based on low-effort heuristics, and a slow, rule-based information processing mode based on high-effort systematic reasoning” (Chaiken & Trope, 1999, p. ix). Further, this interaction is illustrated by describing credibility judgments as a choice between examining easily identified peripheral cues (the messenger) and content (the message), leading to different evaluations in different settings. The focus here is on the domain of the Web, where ambiguous authorship, peer-produced content, and the lack of gatekeepers create an environment where credibility judgments are a necessary routine in triaging information. It reviews the relevant literature on existing credibility frameworks and the component factors that affect credibility judgments. The online encyclopedia (instantiated as Wikipedia and Encyclopedia Britannica) is then proposed as a canonical form to examine the credibility judgment process. The two main claims advanced here are (1) that information sources are composed of both message (the content) and messenger (the way the message is delivered), and that the messenger impacts perceived credibility; and (2) that perceived credibility is tempered by information need (individual engagement). These claims were framed by the models proposed by Wathen & Burkell (2002) and Chaiken (1980) to forward a composite dual-process theory of credibility judgments, which was tested by two experimental studies. The independent variables of interest were: media format (print or electronic); reputation of source (Wikipedia or Britannica); and the participant’s individual involvement in the research task (high or low). The results of these studies encourage a more nuanced understanding of the credibility judgment process by framing it as a dual-process model, and showing that certain mediating variables can affect the relative use of low-effort evaluation and high-effort reasoning when forming a perception of credibility. Finally, the results support the importance of messenger effects on perceived credibility, implying that credibility judgments, especially in the online environment, and especially in cases of low individual engagement, are based on peripheral cues rather than an informed evaluation of content. / text
