21

N-glycosylation signaling pathways in oral squamous cell carcinoma

Almershed, Munirah EME 28 September 2016 (has links)
Oral squamous cell carcinoma (OSCC) accounts for the majority of head and neck cancers and ranks as the sixth most common cancer in the world. OSCC is among the most understudied cancers, and little is known about the molecular mechanisms underlying its etiology and progression to metastasis. A hallmark of cancer is the enhanced posttranslational modification of cell surface proteins with complex N-glycans. Our studies have shown that induced protein N-glycosylation via activation of the core N-glycosylation-regulating gene, DPAGT1, is associated with reduced E-cadherin adhesion, as well as deregulation of several oncogenic signaling pathways, including Wnt/β-catenin and Hippo. Modest increases in DPAGT1 expression are associated with dramatic amplification of Wnt/β-catenin activity and increased expression and nuclear localization of the Hippo pathway effectors TAZ/YAP. The goal of this study was to align the expression and localization of DPAGT1, complex N-glycans, β-catenin, and TAZ/YAP with the progression of oral cancer in vivo from dysplasia to OSCC. Human oral tissues from different stages of OSCC pathogenesis were characterized for DPAGT1/β-catenin/α-catenin/YAP/TAZ expression and localization and correlated with cell surface expression of complex N-glycans by PHA lectin staining and with expression of the primitive cell surface markers CD44, CD24, and CD29. Results showed that high DPAGT1 expression and nuclear TAZ became increasingly associated with disorganized E-cadherin junctions as oral epithelium progressed from mild to severe dysplasia to OSCC. This correlated with increasing expression of cell surface complex N-glycans and CD44. These studies suggest that DPAGT1/β-catenin/TAZ and high PHA staining represent novel signatures of OSCC pathogenesis.
22

Characterization of CAL 27 and HSC-3 cell lines. DPAGT1 gene expression and association with oral squamous cell carcinoma genesis and metastasis

Rodriguez, Angel E. 28 September 2016 (has links)
Cancer, a disease of uncontrolled cell division, growth, and metastasis resulting from genetic mutations, environmental factors, and host response, affects populations worldwide. The etiology, pathogenicity, and genetics of cancer are not well understood, and treatment has not been as effective as scientists have hoped. Continual research is being done to improve current understanding and treatments. Oral squamous cell carcinoma (OSCC) is one of the most common head and neck cancers (representing >90% of all head and neck cancers), involving neoplasms of the oral cavity and oropharynx. OSCC is a very pernicious malignancy that develops from epithelial cells. There is evidence that a key N-glycosylation gene, DPAGT1, is associated with cancer. Although N-glycosylation of proteins is involved in organ development and tissue homeostasis, overexpression of DPAGT1 has been implicated in oral cancer initiation and metastasis. Defects in N-glycosylation underlie congenital disorders, while hyper-N-glycosylation has been shown to be a feature of many cancers. The N-glycosylation pathway directs cell adhesion and cytoskeletal dynamics by impacting the function of E-cadherin, a major epithelial cell-cell adhesion receptor. E-cadherin is a tumor suppressor responsible for the organization of multiprotein complexes named adherens junctions (AJs). In epithelial cells, stable AJs are essential for several cellular processes, including inhibition of cell proliferation, reorganization of the actin cytoskeleton, and maintenance of an epithelial phenotype. Indeed, restoration of AJs has been shown to revert cancer cells from a mesenchymal to an epithelial phenotype and to reduce invasiveness. Previous work has shown that upregulation of DPAGT1 plays a pivotal role in driving canonical Wnt/β-catenin signaling, which represses E-cadherin adhesion and drives tumorigenic phenotypes in oral cancer. This suggests a role for DPAGT1 in coordinating the balance between proliferation and adhesion. To date, little is known about the molecular and cellular details underlying differences among OSCC cell lines. CAL 27 and HSC-3 are human cancer cell lines commonly used in laboratory OSCC research. The main difference between these cell lines is that CAL 27 cells form capsular tumors in nude mouse models, whereas HSC-3 cells form non-capsular, invasive tumors. The goal of this study was to characterize biochemical differences between these two cell lines for further research.
23

A thesis that writes itself : On the threat of AI-generated essays within academia

Olsson, August, Engelbrektsson, Oscar January 2022 (has links)
Historically, cheating in universities has been limited to smuggling notes into exams, unauthorized cooperation, plagiarism, and the use of ghostwriters. Recent advances in natural language processing now allow students to easily generate text that is both unique and, in many ways, indistinguishable from what a human would create. These texts can then be submitted with little to no risk of being caught by anti-cheating software. There are currently a multitude of such text generators online, which vary in ease of use, cost, and capabilities. They are capable enough to generate unique text that will evade the plagiarism tools employed by universities. Combining relatively cheap pricing, ease of use, pressure to perform well in school, and a low risk of detection, it is not difficult to imagine that students will use tools like these to cheat. This thesis focuses mainly on whether humans can differentiate AI-generated essays from human-written ones and what countermeasures can be used to hinder their use. Teachers at Halmstad University were given human-written and AI-generated texts and asked to guess the source of each text presented; the experiment concluded that the teachers' ability to differentiate AI-generated text from human-written text could not be proven. This thesis also surveys the currently available detection methods for AI-generated text and determines that they are not sufficient in their current form. Lastly, this thesis showcases alternative examination methods that could be used instead of essay-style examinations.
24

Can ChatGPT be trusted? A qualitative study of students' trust in ChatGPT in learning contexts

Härnström, Alexandra, Bergh, Isak Eljas January 2023 (has links)
The world's technological development is advancing rapidly, especially when it comes to "smart" machines and algorithms with the ability to adapt to their surroundings. This is partly due to the enormous amount of available data and partly thanks to increased storage capacity. In November 2022, one of the latest AI-based programs was released: the chatbot ChatGPT. This web-based software can engage in real-time conversations with users by answering text-based questions. By quickly, and often accurately, answering users' questions in a human-like and convincing manner, the service generated a great deal of attention in a short period of time; within two months, ChatGPT had over 100 million users. Several studies show that a large number of people lack a general trust in AI. Some studies argue that the responses generated by ChatGPT cannot always be assumed to be completely accurate and should therefore be followed up with extensive fact-checking, as they may otherwise contribute to the spread of false information. Since trust in AI has been shown to be an important factor in how well the technology develops and integrates, a lack of trust in services like ChatGPT can be a hindrance to effective usage. Despite the increased productivity observed when companies implement AI technology, it has not been integrated to the same extent within higher education as an aid for students. By determining the level of trust that students have in ChatGPT in a learning context, information can be obtained that can assist in the integration of such AI technology. However, there is a lack of specific research on students' trust in ChatGPT in a learning context. Therefore, this study aims to fill this knowledge gap through a qualitative interview study. Our research question is: "What trust do students have in ChatGPT in a learning context?". The study was conducted through semi-structured interviews with eight students who had used ChatGPT in a learning context. The interviews generated qualitative data that was analyzed using thematic analysis, and the results showed that students' trust in ChatGPT in a learning context depends on a number of factors. During the analysis, six themes were identified as relevant for answering the research question:
• Experiences
• Usage
• ChatGPT's character
• Influences
• Organizations
• Future trust
25

Educational Artificial Intelligent Chatbot: Teacher Assistant & Study Buddy

Zarris, Dimitrios, Sozos, Stergios January 2023 (has links)
In the rapidly evolving landscape of artificial intelligence, the potential of large language models (LLMs) remains a focal point of exploration, especially in the domain of education. This research delves into the capabilities of AI-enhanced chatbots, with a spotlight on the "Teacher Assistant" and "Study Buddy" approaches. The study highlights the role of AI in offering adaptive learning experiences and personalized recommendations. As educational institutions and platforms increasingly turn to AI-driven solutions, understanding how LLMs can be harnessed to create meaningful and accurate educational content becomes paramount. The research adopts a systematic, multi-faceted methodology. At its core, the study investigates the interplay between prompt construction, engineering techniques, and the resulting outputs of the LLM. Two primary methodologies are employed: the application of prompt structuring techniques and the introduction of advanced prompt engineering methods. The former involves a progressive application of techniques such as persona and template, aiming to discern their individual and collective impacts on the LLM's outputs. The latter delves into more advanced techniques, such as few-shot prompting and chain-of-thought prompting, to gauge their influence on the quality and characteristics of the LLM's responses. Complementing these is the "Study Buddy" approach, in which curricula from domains such as biology, mathematics, and physics are used as foundational materials for the experiments. The findings from this research are poised to have significant implications for the future of AI in education. By offering a comprehensive understanding of the variables that influence an LLM's performance, the study paves the way for the development of more refined and effective AI-driven educational tools. As educators and institutions grapple with the challenges of modern education, tools that can generate accurate, relevant, and diverse educational content can be invaluable. This thesis contributes to the academic understanding of LLMs and provides practical insights that can shape the future of AI-enhanced education; as education continues to evolve, the findings underscore the need for ongoing exploration and refinement to fully leverage AI's benefits in the educational sector.
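As an illustration of the prompt structuring and prompt engineering techniques named in this abstract (persona, template, few-shot, and chain-of-thought), the sketch below shows how such prompt variants might be assembled programmatically. It is a hedged example only: the prompt wording, the build_prompt() helper, and the biology content are assumptions made for illustration, not the prompts actually used in the thesis.

```python
# Illustrative sketch of the named prompt techniques; all strings are hypothetical.

PERSONA = "You are a patient high-school biology teacher."  # persona technique

TEMPLATE = (
    "Topic: {topic}\n"
    "Task: Write a short explanation suitable for a study guide.\n"
    "Constraints: {constraints}"
)  # template technique: fixed structure with variable slots

FEW_SHOT_EXAMPLES = [
    ("What is osmosis?",
     "Osmosis is the movement of water across a membrane from low to high solute concentration."),
]  # few-shot technique: worked question/answer pairs prepended to the query


def build_prompt(topic: str, question: str, chain_of_thought: bool = False) -> str:
    """Combine the techniques progressively, as the study varies them."""
    parts = [PERSONA, TEMPLATE.format(topic=topic, constraints="max 100 words")]
    for q, a in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {q}\nA: {a}")
    # chain-of-thought technique: ask the model to reason step by step before answering
    suffix = "Let's reason step by step.\n" if chain_of_thought else ""
    parts.append(f"Q: {question}\n{suffix}A:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(build_prompt("Cell transport",
                       "How does active transport differ from diffusion?",
                       chain_of_thought=True))
```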
26

Can AI act as a journalist? An investigation of GPT-4's ability to generate news articles

Janouch, Jacob January 2023 (has links)
This thesis investigates artificial intelligence's (AI's) ability to produce news articles, the weaknesses and strengths of AI-generated articles, and the ethical problems involved in implementing AI in journalistic processes. More specifically, GPT-4, at the time of writing a new but powerful language model, was used to generate the articles examined. In the study, six participants were exposed to ten news articles in total, of which five were AI-generated and five were written by humans. Without being told who had written the articles, the participants then expressed their thoughts and feelings about the articles and about AI-generated news in general, in order to increase understanding of how the articles were perceived. The results show that the participants generally had difficulty identifying which articles were generated by AI. They often guessed that a human had written the text even though it was AI-generated, although this varied somewhat between texts. Some of the AI-generated articles had flaws that allowed participants to identify them as non-human, such as repetitions, strange phrasings, an impersonal tone, or politically slanted messaging; they also contained factual errors. Despite certain flaws, however, the AI-generated articles could often convince readers that they had been written by a human. One element that proved particularly effective was the presence of anthropomorphic, or human-like, features in the articles. Towards the end of the study, the participants were asked how they felt about potentially reading AI-generated news in the future. Their opinions were mixed: some were positive about reading algorithmically generated news, while others were skeptical. The conclusion that can be drawn from the literature review combined with the results of the study is that although generative AI such as GPT-4 is well on its way to generating linguistically passable news articles, decisions to implement the technology should be carefully considered, not least because of the ethical problems that can arise when AI acts as a journalist.
27

Sustainable Recipe Recommendation System: Evaluating the Performance of GPT Embeddings versus state-of-the-art systems

Bandaru, Jaya Shankar, Appili, Sai Keerthi January 2023 (has links)
Background: The demand for a sustainable lifestyle is increasing due to the need to tackle rapid climate change. One-third of carbon emissions come from the food industry; reducing emissions from this industry is crucial in fighting climate change. One way to reduce carbon emissions from this industry is to help consumers adopt sustainable eating habits by consuming eco-friendly food. To help consumers find eco-friendly recipes, we developed a sustainable recipe recommendation system that can recommend relevant and eco-friendly recipes to consumers using little information about their previous food consumption. Objective: The main objective of this research is to identify (i) a recommendation algorithm suitable for a dataset with few training and testing examples, and (ii) a technique to re-order the recommendation list so that a proper balance is maintained between the relevance and the carbon rating of the recipes. Method: We conducted an experiment to test the performance of a GPT-embeddings-based recommendation system, Factorization Machines, and a version of a Graph Neural Network-based recommendation algorithm called PinSage for different numbers of training examples, using the ROC AUC value as our metric. After finding the best-performing model, we experimented with different re-ordering techniques to find which technique provides the right balance between relevance and sustainability. Results: The results from the experiment show that PinSage and Factorization Machines predict, on average, whether an item is relevant with 75% probability, whereas the GPT-embedding-based recommendation system predicts with only 55% probability. We also found that the performance of PinSage and Factorization Machines improved as the training set size increased. For re-ordering, we found that using a logarithmic combination of the relevance score and the carbon rating of the recipe helped reduce the average carbon rating of the recommendations with only a marginal reduction in the ROC AUC score. Conclusion: The results show that the chosen state-of-the-art recommendation systems, PinSage and Factorization Machines, outperform GPT-embedding-based recommendation systems by almost 1.4 times.
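As a rough illustration of the re-ordering step described in the results, the sketch below combines a relevance score with a carbon rating using a logarithmic penalty. The exact combination and the weighting parameter alpha used in the thesis are not given in the abstract, so this is only one plausible formulation with hypothetical example data.

```python
# Hedged sketch of re-ranking recommendations by a logarithmic combination of
# relevance and carbon rating; the formula and alpha are assumptions.
import math
from typing import List, Tuple

def rerank(recipes: List[Tuple[str, float, float]], alpha: float = 0.5) -> List[Tuple[str, float, float]]:
    """recipes: (name, relevance in [0, 1], carbon_rating where lower is greener).
    Scores each recipe and sorts in descending order, penalizing high carbon ratings."""
    def score(relevance: float, carbon: float) -> float:
        # log1p dampens the carbon penalty so relevance still dominates the ordering
        return relevance - alpha * math.log1p(carbon)
    return sorted(recipes, key=lambda r: score(r[1], r[2]), reverse=True)

if __name__ == "__main__":
    candidates = [("lentil stew", 0.82, 1.2),
                  ("beef lasagna", 0.90, 8.5),
                  ("veggie curry", 0.75, 1.0)]
    for name, rel, co2 in rerank(candidates):
        print(f"{name}: relevance={rel}, carbon rating={co2}")
```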
28

AI: A helping hand for digital marketing agencies? : AI: En hjälpande hand för digitala marknadsföringsbyråer?

Ekman, Hampus, Strand, Erik January 2024 (has links)
This study evaluates whether generative AI tools built on the language model GPT-3 can streamline the processes of digital marketing agencies. The method used for gathering qualitative data was two sets of semi-structured individual interviews with different digital marketing agencies. The agencies were interviewed about frequent processes, AI usage, and attitudes toward the technology. Two ChatGPT experiments were conducted to get the interviewees' insights on its use and results. The data was categorized with the help of qualitative content analysis. Previous research and journals were additionally used to discuss the potential and consequences of AI, GPT in general, and GPT-3. Information about the different tools that use GPT-3 was collected through websites, articles, and blogs. The study's data shows that tools using GPT-3 can streamline repetitive or time-consuming processes within ideation, content production, data analysis, and personalized customer interactions, and can increase productivity within digital marketing agencies. The tools' tendencies to produce discriminatory, faulty, generic, or uncreative information nevertheless create the need for constant human monitoring, source criticism, post-processing, and complementing with creative input. Researchers recommend post-processing of generative AI results, a method that digital marketing agencies have already begun implementing. The agencies' attitudes toward the technology's future within the industry are generally positive. According to the interviewed agencies, the technology might become a threat to digital marketing professions in the future if AI develops the creative ability to produce material that evokes emotions in the way humans currently can. The agencies also believe that the technological change within the industry will come with new copyright laws, regulations, and pricing structures that emphasize creativity and competence.
29

Regularized Fine-tuning Strategies for Neural Language Models : Application of entropy regularization on GPT-2

Hong, Jae Eun January 2022 (has links)
Deep neural language models like GPT-2 are undoubtedly strong at text generation, but often require special decoding strategies to prevent degenerate output, namely repetition. The use of the maximum likelihood training objective results in a peaked probability distribution, leading to over-confidence of the neural network. In this thesis, we explore entropy regularization for a neural language model, which can smooth the peaked output distribution during the fine-tuning process, using GPT-2. We first define the models in three ways: (1) an out-of-the-box model without fine-tuning, (2) a fine-tuned model without entropy regularization, and (3) a fine-tuned model with entropy regularization. To investigate the effect of domains on the model, we also divide the dataset in three ways: (1) fine-tuned on a heterogeneous dataset and tested on a heterogeneous dataset, (2) fine-tuned on a homogeneous dataset and tested on a homogeneous dataset, and (3) fine-tuned on a heterogeneous dataset and tested on a homogeneous dataset. In terms of entropy regularization, we experiment with controlling the entropy strength parameter (β) over the values [0.5, 1.0, 2.0, 4.0, 6.0] and with annealing the parameter during the fine-tuning process. Our findings show that entropy-based regularization during fine-tuning improves text generation models by significantly reducing the repetition rate without tuning the decoding strategies. By comparing the probabilities of human-generated sentence tokens, we observed that entropy regularization compensates for the shortcomings of the deterministic decoding method (beam search), which mostly selects a few high-probability words. Various studies have explored entropy regularization in the cold-start training process of neural networks, but few cover its effect in the fine-tuning stage of text generation tasks with large-scale pre-trained language models. Our findings present strong evidence that significant improvement in text generation can be achieved by utilizing entropy regularization, a highly cost-effective approach, during the fine-tuning process.
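A minimal sketch of what such an entropy-regularized fine-tuning objective could look like is given below, assuming a PyTorch-style setup. The exact formulation and sign convention used in the thesis are not stated in the abstract, so the loss L = CE - beta * H(p) shown here is one common choice, and the model and batch names in the usage comment are hypothetical.

```python
# Hedged sketch: cross-entropy loss with an entropy bonus that discourages overly
# peaked output distributions; beta is the entropy strength parameter.
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits: torch.Tensor, targets: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """logits: (batch, seq_len, vocab); targets: (batch, seq_len) of token ids."""
    vocab = logits.size(-1)
    ce = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))  # maximum-likelihood term
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()          # mean token-level entropy H(p)
    return ce - beta * entropy                                            # higher entropy lowers the loss

# Hypothetical usage inside a fine-tuning loop (model and batch defined elsewhere):
# logits = model(batch["input_ids"]).logits
# loss = entropy_regularized_loss(logits[:, :-1], batch["input_ids"][:, 1:], beta=2.0)
# loss.backward()
```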
30

Finding structure in passwords : Using transformer models for password segmentation

Eneberg, Lina January 2024 (has links)
Passwords are a common feature of everyone's everyday life. One person has on average 80 accounts, for each of which they are supposed to use a different password. Remembering all these passwords is difficult and leads people to reuse passwords, or reuse them with slight modifications, across many accounts. Studies on memory show that information relating to something personal is more easily remembered. This is likely the reason why many people use passwords relating to themselves, relatives, lovers, friends, or pets. Hackers most often use either brute-force or dictionary attacks to crack a password. These techniques can be quite time consuming, so using machine learning could be a faster and easier approach. Segmenting someone's previous passwords into meaningful units often reveals personal information about the creator and can thus be used as a basis for password guessing. This report focuses on evaluating different sizes of the GPT-SW3 model, which uses a transformer architecture, on password segmentation. The purpose is to find out whether the GPT-SW3 model is suitable for use as a password segmenter and, by extension, whether it can be used for password guessing. As training data, a list of passwords collected from a security breach of a platform called RockYou was used. The passwords were segmented by the author to provide the model with a correct answer to learn from. The evaluation metric, Exact Match, checks whether the model's prediction is the same as the author's segmentation. There were no positive results when training GPT-SW3, most likely because of technical limitations. As the results are rather insufficient, future studies are required to prove or disprove the assumptions this thesis is based on.
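For illustration, the sketch below implements an Exact Match check of the kind described in the abstract, where a prediction counts only if the predicted segmentation is identical to the author's reference segmentation. The space-separated segment format and the example passwords are assumptions made for this sketch, not the thesis's actual data representation.

```python
# Illustrative Exact Match metric for password segmentation; data format is assumed.
from typing import List

def exact_match(predictions: List[str], references: List[str]) -> float:
    """Fraction of passwords whose predicted segmentation equals the reference exactly."""
    assert len(predictions) == len(references)
    hits = sum(1 for pred, ref in zip(predictions, references) if pred == ref)
    return hits / len(references) if references else 0.0

if __name__ == "__main__":
    refs = ["fluffy cat 99", "rock you", "anna 1987"]    # author's segmentations
    preds = ["fluffy cat 99", "rockyou", "anna 1987"]    # model outputs
    print(f"Exact Match: {exact_match(preds, refs):.2f}")  # prints 0.67
```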
