  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Proteomics Studies of Subjects with Alzheimer’s Disease and Chronic Pain

Emami Khoonsari, Payam January 2017 (has links)
Alzheimer’s disease (AD) is a neurodegenerative disease and the major cause of dementia, affecting more than 50 million people worldwide. Chronic pain is long-lasting, persistent pain that affects more than 1.5 billion people worldwide. The overlapping and heterogeneous symptoms of AD and chronic pain conditions complicate their diagnosis, emphasizing the need for more specific biomarkers to improve diagnosis and to understand the disease mechanisms. To characterize the disease pathology of AD, we measured protein changes in the temporal neocortex of AD subjects using mass spectrometry (MS). We found that proteins involved in exo-endocytic and extracellular vesicle functions display altered levels in the AD brain, potentially resulting in neuronal dysfunction and cell death. To detect novel biomarkers for AD, we used MS to analyze the cerebrospinal fluid (CSF) of AD patients and found decreased levels of eight proteins compared to controls, potentially indicating abnormal activity of the complement system in AD. By integrating the new proteomics markers with the absolute levels of Aβ42, total tau (t-tau) and p-tau in CSF, we improved the accuracy of early AD diagnosis from 83% to 92%. We found increased levels of chitinase-3-like protein 1 (CH3L1) and decreased levels of neurosecretory protein VGF (VGF) in AD compared to controls. By exploring the CSF proteome of neuropathic pain patients before and after successful spinal cord stimulation (SCS) treatment, we found altered levels of twelve proteins involved in neuroprotection, synaptic plasticity, nociceptive signaling and immune regulation. To detect biomarkers for diagnosing the chronic pain state known as fibromyalgia (FM), we analyzed the CSF of FM patients using MS. We found altered levels of four proteins, representing novel candidate biomarkers for diagnosing FM. These proteins are involved in inflammatory mechanisms, energy metabolism and neuropeptide signaling.
Finally, to facilitate fast and robust large-scale omics data handling, we developed an e-infrastructure. We demonstrated that the e-infrastructure provides high scalability and flexibility and can be applied in virtually any field, including proteomics. This thesis demonstrates that proteomics is a promising approach for gaining deeper insight into the mechanisms of nervous system disorders and for finding biomarkers for the diagnosis of such diseases.
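The marker-integration step described in the abstract — combining classical CSF markers (Aβ42, t-tau, p-tau) with newly found proteomic markers into one diagnostic prediction — can be sketched as a simple logistic score. Everything numeric below (weights, cut-off, synthetic profiles) is invented for illustration and is not taken from the thesis; only the directions of change (lower Aβ42 and VGF, higher tau and CH3L1 in AD) follow the abstract.

```python
import math

# Hypothetical weights: sign follows the direction of change reported in the
# abstract, magnitude is invented for the sketch.
WEIGHTS = {
    "abeta42": -1.2,   # lower Abeta42 points toward AD
    "t_tau":    0.9,   # higher total tau points toward AD
    "p_tau":    1.1,   # higher phosphorylated tau points toward AD
    "ch3l1":    0.7,   # CH3L1 reported increased in AD
    "vgf":     -0.6,   # VGF reported decreased in AD
}

def ad_risk_score(z_scores):
    """Logistic score from standardized (z-scored) marker levels."""
    linear = sum(WEIGHTS[m] * z_scores[m] for m in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-linear))

# Synthetic profiles: an "AD-like" subject and a control.
ad_like = {"abeta42": -2.0, "t_tau": 1.5, "p_tau": 1.5, "ch3l1": 1.0, "vgf": -1.0}
control = {"abeta42": 0.5, "t_tau": -0.5, "p_tau": -0.3, "ch3l1": -0.2, "vgf": 0.4}

print(ad_risk_score(ad_like) > 0.5)   # True for this synthetic profile
print(ad_risk_score(control) < 0.5)   # True for this synthetic profile
```

In practice the weights would be fitted on labelled cohorts (e.g. by logistic regression) rather than set by hand; the sketch only shows how several markers fold into a single diagnostic score.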
112

Automated regression tests regarding workflows and permissions in ProjectWise: A case study of ProjectWise at Trafikverket (the Swedish Transport Administration)

Ograhn, Fredrik, Wande, August January 2016 (has links)
Software testing is done to determine whether a system meets its specified requirements and to find bugs. It is an important part of system development and involves, among other things, regression testing. Regression tests are performed to ensure that a change in the system does not adversely affect other parts of the system. Document management systems often handle sensitive organizational data, which places high demands on security. Permissions in the system therefore have to be tested thoroughly to ensure that data does not fall into the wrong hands. Document management systems make it possible for several organizations to pool their resources and knowledge to achieve common goals. Common work processes are supported through workflows that contain a number of states, and different permissions apply at each state. When a permission changes, regression tests are required to ensure that the change has not affected other permissions. This study was conducted as a qualitative case study whose purpose was to describe the challenges of regression testing roles and permissions in document workflows in a document management system. Interviews and an observation revealed that a major challenge in these tests is that workflow states follow a predetermined sequence; completing the sequence involves an enormous number of permissions that must be tested, which makes the testing effort very extensive in terms of, among other things, time and cost. The study focused on the document management system ProjectWise, managed by Trafikverket (the Swedish Transport Administration). Supporting documentation for decision making was produced for a technical solution for automated regression testing of roles and permissions in workflows for ProjectWise. Based on requirements gathering, the proposed solution involved Team Foundation Server (TFS), Coded UI and a keyword-driven test method. Finally, the differences the technical solution could make compared with today's manual testing were examined. Based on literature, a document study and first-hand experience, test automation proved able to make a difference in a number of identified problem areas, including time and cost.
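The keyword-driven test method mentioned above can be sketched in a few lines: test steps are data (a keyword plus arguments), and a small runner maps keywords to actions against a model of the system under test. The toy workflow, states, roles and permission rules below are all invented for illustration; the thesis pairs this approach with TFS and Coded UI against ProjectWise itself.

```python
# A toy document workflow: states in a fixed sequence, with per-state
# role permissions (all names and rules are hypothetical).
PERMISSIONS = {
    "draft":    {"engineer": {"read", "write"}, "reviewer": {"read"}},
    "review":   {"engineer": {"read"},          "reviewer": {"read", "approve"}},
    "approved": {"engineer": {"read"},          "reviewer": {"read"}},
}
SEQUENCE = ["draft", "review", "approved"]

class WorkflowModel:
    def __init__(self):
        self.state = SEQUENCE[0]
    def advance(self):
        self.state = SEQUENCE[SEQUENCE.index(self.state) + 1]
    def has_permission(self, role, action):
        return action in PERMISSIONS[self.state].get(role, set())

def run_test_case(steps):
    """Execute a keyword-driven test case; return the result of each step."""
    model, results = WorkflowModel(), []
    actions = {
        "advance": lambda m: m.advance() or True,
        "can":     lambda m, role, act: m.has_permission(role, act),
        "cannot":  lambda m, role, act: not m.has_permission(role, act),
    }
    for keyword, *args in steps:
        results.append(actions[keyword](model, *args))
    return results

# A regression test case written purely as keywords + arguments.
case = [
    ("can", "engineer", "write"),      # in draft state
    ("advance",),                      # move to review
    ("cannot", "engineer", "write"),   # write revoked in review
    ("can", "reviewer", "approve"),
]
print(run_test_case(case))  # [True, True, True, True]
```

The point of the pattern is that new regression cases for changed permissions are added as data rather than code, which is what makes large permission matrices tractable to cover.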
113

Distributed knowledge sharing and production through collaborative e-Science platforms

Gaignard, Alban 15 March 2013 (has links)
This thesis addresses the issues of coherent distributed knowledge production and sharing in the life-science area. In spite of the continuously increasing computing and storage capabilities of computing infrastructures, the management of massive scientific data through centralized approaches has become inappropriate for several reasons: (i) they do not guarantee the autonomy of data providers, who are constrained, for ethical or legal reasons, to keep control over the data they host, and (ii) they do not scale and adapt to the massive scientific data produced through e-Science platforms. In the context of the NeuroLOG and VIP life-science collaborative platforms, we address, on the one hand, the distribution and heterogeneity issues underlying the sharing of possibly sensitive resources, and on the other hand, automated knowledge production through the usage of these e-Science platforms, to ease the exploitation of the massively produced scientific data.
We rely on an ontological approach for knowledge modeling and propose, based on Semantic Web technologies, (i) to extend these platforms with efficient, static and dynamic, transparent federated semantic querying strategies, and (ii) to extend their data processing environments to automate the semantic annotation of in silico experiment results, based on both provenance information captured at run time and domain-specific inference rules. The results of this thesis were evaluated on the Grid'5000 distributed and controlled infrastructure. They contribute to addressing three of the main challenges faced by computational science platforms: (i) a model for secured collaborations and a distributed access-control strategy allowing the setup of multi-centric studies while accounting for competitive activities, (ii) semantic experiment summaries, meaningful from the end-user perspective, aimed at easing navigation through the massive scientific data resulting from large-scale experimental campaigns, and (iii) efficient distributed querying and reasoning strategies, relying on Semantic Web standards, aimed at sharing capitalized knowledge and providing connectivity towards the Web of Linked Data. Keywords: scientific workflows, semantic web services, provenance, Linked Data, Semantic Web, knowledge-base federation, distributed data integration, e-Science, e-Health.
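The federated-querying idea above — evaluating a query against several autonomous stores and merging the bindings, so that no site hands over its full dataset — can be sketched with in-memory triple stores. Real deployments such as NeuroLOG/VIP would use SPARQL endpoints (e.g. the SPARQL 1.1 `SERVICE` keyword); the "sites" and triples below are invented stand-ins.

```python
SITE_A = [  # e.g. one clinical centre's annotations (hypothetical data)
    ("subject42", "hasAcquisition", "scan1"),
    ("scan1", "modality", "MRI"),
]
SITE_B = [  # another centre (hypothetical data)
    ("subject99", "hasAcquisition", "scan7"),
    ("scan7", "modality", "MRI"),
]

def match(pattern, triple):
    """Match one triple pattern (variables start with '?'); bindings or None."""
    bindings = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            bindings[p] = t
        elif p != t:
            return None
    return bindings

def federated_query(pattern, sites):
    """Evaluate a single triple pattern over all sites and merge the bindings."""
    return [b for site in sites for t in site
            if (b := match(pattern, t)) is not None]

# "Find all scans with modality MRI" across both sites.
rows = federated_query(("?scan", "modality", "MRI"), [SITE_A, SITE_B])
print(sorted(b["?scan"] for b in rows))  # ['scan1', 'scan7']
```

The static/dynamic strategies the thesis develops are about deciding *which* sites each pattern is sent to; this sketch simply broadcasts to all of them.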
114

On the construction of decentralised service-oriented orchestration systems

Jaradat, Ward January 2016 (has links)
Modern science relies on workflow technology to capture, process, and analyse data obtained from scientific instruments. Scientific workflows are precise descriptions of experiments in which multiple computational tasks are coordinated based on the dataflows between them. Orchestrating scientific workflows presents a significant research challenge: they are typically executed in a manner such that all data pass through a centralised computer server known as the engine, which causes unnecessary network traffic that leads to a performance bottleneck. These workflows are commonly composed of services that perform computation over geographically distributed resources, and involve the management of dataflows between them. Centralised orchestration is clearly not a scalable approach for coordinating services dispersed across distant geographical locations. This thesis presents a scalable decentralised service-oriented orchestration system that relies on a high-level data coordination language for the specification and execution of workflows. This system's architecture consists of distributed engines, each of which is responsible for executing part of the overall workflow. It exploits parallelism in the workflow by decomposing it into smaller sub-workflows, and determines the most appropriate engines to execute them using computation placement analysis. This permits the workflow logic to be distributed closer to the services providing the data for execution, which reduces the overall data transfer in the workflow and improves its execution time. This thesis provides an evaluation of the presented system which concludes that decentralised orchestration provides scalability benefits over centralised orchestration, and improves the overall performance of executing a service-oriented workflow.
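The data-transfer argument above can be made concrete with a toy placement analysis: if every dataflow edge passes through one central engine, every edge costs a transfer, whereas with one engine per location only cross-location edges do. The workflow, locations, sizes and greedy placement rule below are invented for illustration, not the thesis's actual analysis.

```python
# task -> (service location, input-data size in MB, upstream tasks)
WORKFLOW = {
    "fetch":   ("eu", 0,   []),
    "clean":   ("eu", 500, ["fetch"]),
    "analyse": ("us", 200, ["clean"]),
    "report":  ("us", 10,  ["analyse"]),
}

def centralised_traffic(workflow):
    """All dataflow passes through one engine: every edge is a transfer."""
    return sum(size for _, size, deps in workflow.values() for _ in deps)

def decentralised_traffic(workflow):
    """One engine per location: only cross-location edges cost a transfer."""
    total = 0
    for task, (loc, size, deps) in workflow.items():
        for dep in deps:
            if workflow[dep][0] != loc:
                total += size
    return total

print(centralised_traffic(WORKFLOW))    # 710
print(decentralised_traffic(WORKFLOW))  # 200 (only clean -> analyse crosses)
```

Partitioning the DAG by service location is exactly the kind of computation placement that moves the workflow logic closer to the data-producing services.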
115

Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis

Yuankun Fu (10223831) 29 April 2021 (has links)
As high performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization and analysis are a key factor in scientific campaigns. As one approach to studying fluid behavior, computational fluid dynamics (CFD) simulations have progressed rapidly over the past several decades and have revolutionized many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at a lower computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis.

I start my research on how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and corresponding data analysis. Detailed performance analysis using visualized tracing shows that even the fastest existing workflow system still has 42% overhead. I then develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grain task parallelism of full asynchrony with pipelining. I also design a novel concurrent data transfer optimization method, which employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system. It significantly reduces the data transfer time, by up to 32%, especially when the simulation application is stalled. Investigation of the speedup using OmniPath network tools shows that network congestion is alleviated by up to 80%. Finally, the scalability of the Zipper system has been verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores. Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times.

After minimizing the end-to-end time of the LBM workflow, I turned to accelerating the memory-bound LBM algorithms. We first design novel parallel 2D memory-aware LBM algorithms, then extend them to a 3D memory-aware LBM that combines single-copy distribution, single sweep, the swap algorithm, prism traversal, and the merging of multiple temporal time steps. Strong scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are illustrated by theoretical algorithm analysis, and experimental roofline charts on modern CPU architectures show that the memory-aware LBM algorithms improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
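The roofline reasoning behind the arithmetic-intensity claim can be sketched in a few lines: AI is flops per byte moved, and attainable performance is the smaller of peak compute and AI times memory bandwidth. The machine numbers below are made up for the sketch; only the ~4.6x AI gain echoes the abstract.

```python
def attainable_gflops(ai, peak_gflops, bandwidth_gbs):
    """Roofline model: performance is compute-bound or memory-bound."""
    return min(peak_gflops, ai * bandwidth_gbs)

PEAK, BW = 1000.0, 100.0   # hypothetical CPU: 1 TFLOP/s peak, 100 GB/s

baseline_ai = 1.0                 # a memory-bound baseline LBM sweep (assumed)
improved_ai = 4.6 * baseline_ai   # the ~4.6x AI gain reported in the abstract

print(attainable_gflops(baseline_ai, PEAK, BW))  # 100.0 -> memory-bound
print(attainable_gflops(improved_ai, PEAK, BW))  # ~460  -> still below peak
```

This is why raising AI (e.g. by merging time steps and traversing prisms to reuse cached data) directly raises the attainable performance of a memory-bound kernel until it hits the compute roof.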
116

How factory flows can be visualized using Unreal Engine

Bergström, Emmy, Lundberg, Robert January 2023 (has links)
Virtual Reality (VR) is a tool with great potential and is under constant development for use in new fields. The project Fabriksvisualisering (Factory Visualization) has, within the digital factory field, developed a tool for companies to build their factories in Unreal Engine (UE) and VR. The tool gives companies the opportunity to test their factory layouts before implementing them in the real world, to avoid costly mistakes. The following report examines possibilities for users to simulate and visualize their factory workflows as part of the project Fabriksvisualisering. To achieve this, different solutions for visualizing the flow of products, staff and vehicles have been explored. User tests were carried out to test how an effect from UE can be used to visualize a flow of products in VR. The result gives users the opportunity to experience interactions with factory workflows and to visualize how they flow in VR. The project resulted in two options that visualize product flow and four options that visualize workflow. Of these six solutions, three were chosen and implemented in the project Fabriksvisualisering. These solutions are based on the construction of splines and include both alternatives for product flows and one alternative for workflows. The selection was based on functionality, user-friendliness and how realistic the outcome is. The conclusion is that there are several ways to visualize four out of seven factory workflows. The flows that are possible to visualize are the material handling of raw materials, semi-finished and finished products, as well as the transportation and movement of the workforce. This can be visualized with the help of AI, a robot system within UE, the construction of splines, and Niagara systems connected to splines.
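The spline construction behind the chosen solutions can be illustrated with the generic maths: a product's position along a factory path is sampled from a Catmull-Rom spline through waypoints. In the project this is done with UE's spline components; the formula and the 2D waypoints below are just a hedged stand-in sketch.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point at parameter t in [0, 1] on the spline segment between p1 and p2."""
    def axis(a, b, c, d):
        # Standard (uniform) Catmull-Rom basis, expanded per coordinate.
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t * t * t)
    return tuple(axis(a, b, c, d) for a, b, c, d in zip(p0, p1, p2, p3))

# Invented 2D waypoints for a conveyor path.
w = [(0, 0), (1, 0), (2, 1), (3, 1)]

print(catmull_rom(*w, 0.0))  # (1.0, 0.0) -> the spline passes through p1
print(catmull_rom(*w, 1.0))  # (2.0, 1.0) -> and through p2
```

Animating a mesh is then just sampling `t` over time and placing the product at the returned point, which is essentially what moving an object along a UE spline does.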
117

Transforming Corporate Learning using Automation and Artificial Intelligence: An exploratory case study for adopting automation and AI within Corporate Learning at financial services companies

Klinga, Petter January 2020 (has links)
As the emergence of new technologies continuously disrupts the way organizations function and develop, the majority of initiatives within Learning and Development (L&D) are far from fully effective. The purpose of this study was to conduct an exploratory case study investigating how automation and AI technologies could improve corporate learning within financial services companies. The study was delimited to three case companies, all primarily operating in the Nordic financial services industry. The exploratory research was carried out through a literature review, several in-depth interviews, and a survey of a selected number of research participants. The research revealed that the current state of training within financial services is characterized by a significant amount of manual and administrative work, a lack of intelligence in decision-making, and a failure to take existing employee knowledge into account. The empirical evidence likewise revealed a wide array of opportunities for adopting automation and AI technologies into the learning workflows of the L&D organizations within the case companies.
118

Designing scientific workflows following a structure and provenance-aware strategy

Chen, Jiuqiang 11 October 2013 (has links) (PDF)
Scientific workflow systems include provenance modules that collect information about executions (data consumed and produced), making it possible to ensure the reproducibility of an experiment. For several reasons, the complexity of workflow structures and of their executions is increasing, making workflow reuse more difficult. The overall goal of this thesis is to improve workflow reuse by providing strategies to reduce the complexity of workflow structures while preserving provenance. Two strategies are introduced. First, we introduce SPFlow, a provenance-preserving rewriting algorithm for scientific workflows that transforms any directed acyclic graph (DAG) into a simpler series-parallel (SP) structure. SP structures allow polynomial-time algorithms for complex operations on workflows (e.g., comparing them), whereas the same operations are NP-hard for general DAG structures. Second, we propose a technique able to reduce the redundancy present in workflows by detecting and removing the patterns responsible for it, called "anti-patterns". We designed the DistillFlow algorithm, able to transform a workflow into a semantically equivalent "distilled" workflow with a more concise structure, from which anti-patterns are removed as far as possible. Our solutions (SPFlow and DistillFlow) have been tested systematically on large collections of real workflows, in particular from the Taverna system. Our tools are available at: https://www.lri.fr/~chenj/.
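The redundancy-removal idea can be sketched with one toy merge rule: two tasks with the same operation and the same inputs compute the same thing, so one can be removed and its consumers rewired. This only illustrates the spirit of DistillFlow — the real algorithm targets specific anti-pattern structures in Taverna workflows — and the task names below are invented.

```python
def distill(tasks):
    """tasks: {name: (operation, tuple_of_input_names)} -> distilled dict."""
    canonical = {}   # (operation, inputs) -> surviving task name
    renamed = {}     # removed task name  -> surviving task name
    for name in sorted(tasks):
        op, inputs = tasks[name]
        # Rewire inputs through any merges already decided.
        inputs = tuple(renamed.get(i, i) for i in inputs)
        key = (op, inputs)
        if key in canonical:
            renamed[name] = canonical[key]   # duplicate: merge it away
        else:
            canonical[key] = name
    return {name: key for key, name in canonical.items()}

# 'norm2' duplicates 'norm1' (same op, same input), so it is merged away
# and 'stats' is rewired to read from 'norm1'.
workflow = {
    "load":  ("read_csv", ()),
    "norm1": ("normalise", ("load",)),
    "norm2": ("normalise", ("load",)),
    "stats": ("summarise", ("norm2",)),
}
print(sorted(distill(workflow)))  # ['load', 'norm1', 'stats']
```

Because the merge preserves which operation each surviving task applies to which data, the distilled workflow is semantically equivalent to the original, which is the property DistillFlow guarantees.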
119

Use of ChatGPT: An Interview Study on Generative AI as an Interactive Sounding Board for Developers

Berling, Kevin January 2024 (has links)
Developers face increasingly complex problems and challenges that demand innovative solutions. Traditionally, they have used colleagues, forums and documentation as resources for bouncing ideas around and working out solutions. Recent advances in artificial intelligence (AI) have opened up new possibilities: AI has the potential to act not only as a conversation partner but also as a critic and problem solver in the developer's workflow. This study explores the interaction between developers and ChatGPT in their daily work, with a particular focus on how ChatGPT can be used as a sounding board. Through a qualitative research method based on in-depth semi-structured interviews with nine developers, the study examines how these developers integrate ChatGPT into their workflows and which tools and methods they use to facilitate this interaction. The analysis also highlights ChatGPT's contributions to problem solving and decision making, as well as the technical and organizational challenges the developers face. The results show that ChatGPT is a valuable tool for improving code quality, creating templates and boilerplate code, and streamlining documentation and translation processes. However, limitations were identified, such as slow responses, the need for specific phrasings, and difficulties handling complex or niche problems. The study concludes that, despite these challenges, ChatGPT has significant potential to act as a constructive partner in developers' daily work, which can lead to increased efficiency and improved quality in the software development process.
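The "sounding board" pattern the interviewees describe — and the finding that specific phrasings matter — can be sketched as a prompt-framing helper: instead of asking for a finished answer, the developer frames the model as a critic of a proposed approach. The `chat()` stub below stands in for any chat-completion API; it makes no network call, and the wording of the prompt template is an invented example.

```python
def sounding_board_prompt(problem, proposed_approach):
    """Frame a request so the model critiques an idea rather than solving it."""
    return (
        "Act as a critical colleague. Do not write the solution.\n"
        f"Problem: {problem}\n"
        f"My proposed approach: {proposed_approach}\n"
        "List the strongest objections and edge cases I may have missed."
    )

def chat(prompt):
    # Stub: a real integration would call a chat-completion API here.
    return f"[model response to {len(prompt)} chars of prompt]"

p = sounding_board_prompt("Cache invalidation across services",
                          "Broadcast invalidation events over a message bus")
print(chat(p).startswith("[model response"))  # True
```

Keeping the framing in a reusable function is one way developers in the study made the interaction repeatable instead of re-typing ad-hoc phrasings.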
120

Migrating from integrated library systems to library services platforms : An exploratory qualitative study for the implications on academic libraries’ workflows

Grammenis, Efstratios, Mourikis, Antonios January 2018 (has links)
The present master thesis is an exploratory qualitative study of academic libraries' transition from integrated library systems to next-generation integrated library systems, or library services platforms, and the potential implications for their internal workflows. Libraries all over the world currently face a number of challenges in acquiring, describing and making available to the public all the resources, both printed and electronic, that they manage. Academic libraries in particular have every reason to meet their users' needs, since the majority of their users increasingly rely on library sources for scientific research and educational purposes. In this study we attempt to explore the phenomenon globally using the available literature and to identify the implications for libraries' workflows and possible future developments. Moreover, through observation and semi-structured interviews we try to identify current developments in the Greek context regarding the adoption of next-generation ILS and the possible implications for workflows. Finally, we attempt a comparison between the Greek situation and the international one.