About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

En studie över insättning av dropp i ambulans : droppets nödvändighet och förslag på förbättringar / A Study on Administration of Infusion in Ambulance : the Necessity of Infusion and Suggestions on Improvements

Grunditz, Anna, Elfström, Anna January 2011 (has links)
No description available.
62

Risker med patienters användning av elektrisk utrustning under hemodialysbehandling / Risks Involved with Patients’ Use of Electrical Equipment during Hemodialysis Treatment

Hedvall, Anders, Nordgren, Sofia January 2011 (has links)
No description available.
63

Grafiskt användargränssnitt för rörelsemätning av ryggraden / Graphical User Interface for Spinal Motion Measurement

Onoszko, Arthur, Manth, Rafael January 2011 (has links)
No description available.
64

Datorstödd navigering vid frakturkirurgi : Inventering och nya metoder / Computer-assisted Navigation during Fracture Surgery : Inventory and new methods

Jernberg, Cassandra, Gistvik, Helena January 2011 (has links)
No description available.
65

En jämförelse mellan elektroniska journalsystem för öppenvården / A Comparison of Electronic Health Record Systems for Outpatient Care

Friberg, Daniel, Johansson, Martin January 2011 (has links)
No description available.
66

Mätning av kranskärlet LAD:s slingrighet : en pilotstudie / Measuring Tortuosity of the LAD Coronary Artery : a Pilot Study

Illerstam, Fredrik, Morén, David January 2011 (has links)
No description available.
67

A flexible resonance sensor system for detection of cancer tissue : evaluation on silicone

Åstrand, Anders P. January 2012 (has links)
The most common form of cancer among men in Europe and the US is prostate cancer. When a radical prostatectomy has been found necessary, it is of interest to examine the prostate, as tumour tissue on the capsule might indicate that the cancer has metastasised. This is commonly done by a microscope-based morphometric investigation. Tumour tissue is normally stiffer than healthy tissue. Sensors based on piezoelectric resonance technology have been introduced into the medical field during the last decade. By studying the change in resonance frequency when a sensor comes into contact with a material, conclusions can be drawn about the material. A new and flexible measurement system using a piezoelectric resonance sensor has been evaluated. Three translation stages with stepper motors, two for horizontal movement and one for vertical movement, are controlled from a PC. A piezoelectric resonance element and a force sensor are integrated into a sensor head that is mounted on the vertical translation stage. The piezoelectric element is connected to a feedback circuit and resonates at its resonance frequency until it comes into contact with a material, at which point a frequency shift can be observed. The force sensor is used to measure the applied force between the sensor and the material. These two parameters are combined into a third, called the stiffness parameter, which is important for stiffness evaluation. For measurements on objects with different geometries, the vertical translation stage can be aimed at a platform for flat objects or a fixture for spherical objects. The vertical translation stage is mounted on a manual rotational stage with which the contact angle between the sensor and the measured surface can be adjusted. The contact angles covered are between 0° and 35° from a line perpendicular to the surface of the measured object. The measured objects were made from silicones of different stiffness, in the shape of flat discs and spheres. The indentation velocity of the sensor can be set between 1 mm/s and 5 mm/s. In the three papers that form the basis for this licentiate thesis, we have investigated the dependence of the frequency shift, the applied force and the stiffness parameter on the contact angle and the indentation velocity at different impression depths. The maximum error of the measurement system has also been determined. The results of the measurements indicate that great care must be taken when aiming the sensor at the point on the surface where the measurements are to be performed. Deviations in contact angle of more than ±10° from a line perpendicular to the surface will result in an underestimation of the frequency shift, meaning that the tissue will be regarded as stiffer than it really is. This result is important as the flat silicone models have a very even surface, which makes a controlled contact angle possible. Biological tissue can have a rough and uneven surface, which can lead to unintentional deviations in the contact angle. The magnitude of the stiffness parameter is favoured by a high indentation velocity compared to a low one. The evaluation of this measurement system has shown that it is possible to distinguish between the soft and stiff silicone models used in this initial phase of the study. New features in this measurement system are the fixture that makes measurements on spherical objects possible and the ability to vary the contact angle. This is promising for future studies and measurements on whole prostates in vitro.
A future application for this measurement system is to aid surgeons performing radical prostatectomy in the search for tumour tissue on the capsule of the prostate, as the presence of tumour tissue can indicate that the cancer has spread to the surrounding tissue.
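The abstract describes combining the measured frequency shift and the applied contact force into a single stiffness parameter. The exact definition is not given here, so the sketch below only illustrates the idea with an assumed combination (the product of frequency shift and force); the function name, sign convention and synthetic numbers are all hypothetical rather than taken from the thesis.

```python
import numpy as np

def stiffness_parameter(freq_hz, force_n, f_free_hz):
    """Illustrative combination of frequency shift and contact force.

    freq_hz   : resonance frequency recorded during contact (Hz)
    force_n   : contact force from the force sensor (N)
    f_free_hz : free (unloaded) resonance frequency of the element (Hz)

    NOTE: taking the product of frequency shift and force is an assumption
    for illustration only; the thesis defines its own stiffness parameter.
    """
    delta_f = freq_hz - f_free_hz      # frequency shift caused by contact
    return delta_f * force_n           # assumed combined stiffness measure

# Synthetic traces: the stiffer sample is assumed to shift the resonance
# frequency more per newton of contact force (sign convention is illustrative).
f_free = 95_000.0                      # Hz, hypothetical free resonance
force = np.linspace(0.0, 1.0, 6)       # N, increasing indentation force
soft_sample = f_free + 40.0 * force    # Hz
stiff_sample = f_free + 120.0 * force  # Hz

print(stiffness_parameter(soft_sample, force, f_free))
print(stiffness_parameter(stiff_sample, force, f_free))
```

With the same force trace, the stiffer synthetic sample yields the larger parameter values, which is the kind of separation between soft and stiff silicone models the abstract reports.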
68

En jämförande studie mellan traditionell hudvecksmätning (Harpenden) och ultraljudsmätning (BodyMetrix) för bestämning av kroppssammansättning / A comparative study between traditional skinfold measurement (Harpenden) and ultrasound measurement (BodyMetrix) for estimation of body composition

Sömskar, Johanna, Henningsson, Julia January 2023 (has links)
Background: Biomedical technology is a rapidly evolving field. The development of new technologies and methods takes place at the same time as the range of products increases. It is not uncommon for there to be multiple apparatuses for the same purpose, and the field of body composition is no exception. Weight fluctuation is natural and the reasons behind it can vary: underlying diseases, living habits or level of physical activity. Understanding causal factors is important, for instance to identify health risks and to design diet plans or training schedules.
Aim: The aim of this study is to compare two different apparatuses, Harpenden and BodyMetrix, against a third reference apparatus, BodPod. Which of the two relates best to the reference apparatus? All apparatuses are used for estimation of body composition, and each uses a different measuring method. The parameter of interest is fat percentage, and the result is analyzed from two perspectives: women and men together, and women and men separately.
Method: This study consists of three stages: a literature study, an educational phase, and an empirical study. The literature study formed the basis for this work and provided useful and reliable information about the subject and the technical aspects of the utilised apparatuses. The educational phase was designed to give the authors an opportunity to learn how to manage the apparatuses before entering the empirical study. The empirical study was designed to provide the necessary data for later comparison. 16 individuals participated in the empirical study, and measurements with Harpenden and BodyMetrix were performed based on Jackson and Pollock's three-site method.
Result and conclusion: This study found that Harpenden related better to the reference apparatus than BodyMetrix, both for women and men together and for women and men separately. Deviations of varying magnitude were observed, but the error from Harpenden was more consistent while the error from BodyMetrix was more scattered.
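The measurements above are based on Jackson and Pollock's three-site method. As a rough illustration of how three skinfold readings translate into a fat percentage, the sketch below uses the commonly cited three-site body-density equations followed by the Siri equation; this code is not from the thesis, and the coefficients are quoted from memory, so verify them against the original publications before relying on the numbers.

```python
def jackson_pollock_3site_bodyfat(skinfolds_mm, age_years, sex):
    """Estimate body-fat percentage from three skinfold measurements.

    skinfolds_mm : three site readings in mm
                   (men: chest, abdomen, thigh; women: triceps, suprailiac, thigh)
    Uses commonly cited Jackson & Pollock three-site body-density equations
    followed by the Siri equation; coefficients are assumptions to verify.
    """
    s = sum(skinfolds_mm)
    if sex == "male":
        density = 1.10938 - 0.0008267 * s + 0.0000016 * s**2 - 0.0002574 * age_years
    elif sex == "female":
        density = 1.0994921 - 0.0009929 * s + 0.0000023 * s**2 - 0.0001392 * age_years
    else:
        raise ValueError("sex must be 'male' or 'female'")
    return 495.0 / density - 450.0   # Siri equation: %BF from body density

# Hypothetical readings of 10, 18 and 14 mm for a 25-year-old man.
print(round(jackson_pollock_3site_bodyfat([10, 18, 14], 25, "male"), 1))
```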
69

Inclusion of more sources in a commercial brachytherapy Monte Carlo

Wiman, Philip, Forsberg, Emil, Vestling, Robin January 2023 (has links)
Brachytherapy is a cancer treatment method where a solid radioactive source is placed inside the body in order to target cancer cells with a high dose of radiation while minimizing damage to surrounding healthy tissue. The most common method for treatment planning in brachytherapy is the TG43 formalism, in which the whole patient geometry is treated as water. Using TG43 leads to inaccuracies when trying to model materials that differ from water, such as air, bone or metal implants. A method that can accurately model particle transport in different media is the Monte Carlo (MC) method. In order to reduce the computational work required to perform MC simulations, pre-simulated phase-space files can be used, which contain information about the initial particles' positions, directions and energies. In this work, four different brachytherapy sources were simulated and analyzed using the MC simulation software egs_brachy, a general-purpose, open-source code base. The four sources are referred to as Flexisource, Bebig, VS2000 and Ytterbium. The main objective was to create 4D histograms of the phase-space files and find a binning that could provide a correct dose distribution while maintaining a small file size. The histograms were then used as input for the Monte Carlo dose engine in the treatment planning system RayStation®. Two types of validation of the histograms were performed. Firstly, the Monte Carlo engine was compared to TG43 in a water patient. Secondly, each of the four sources was simulated in egs_brachy, placed both inside and outside a slab of material surrounded by water. The materials were lung, bone and tungsten. The egs_brachy simulations were then imported into RayStation® and compared against the Monte Carlo engine. The results for the material slabs were similar for the four sources. A notable difference between the egs_brachy dose and the Monte Carlo engine could be seen at the interface of the material slabs. The tungsten case showed a larger difference than the other two materials. Overall, the results were considered promising.
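The main objective mentioned above was to bin pre-simulated phase-space data into 4D histograms that stay small while still reproducing the dose distribution. The sketch below shows this kind of binning on synthetic particle records using numpy; the choice of the four binned quantities, the bin edges and the random data are assumptions for illustration only, not the thesis's actual configuration or the egs_brachy/RayStation® interface.

```python
import numpy as np

# Minimal sketch: bin phase-space particle records into a 4D histogram.
# Real phase-space files (e.g. IAEA format) need a dedicated reader; here
# the four quantities and their ranges are hypothetical placeholders.
rng = np.random.default_rng(0)
n = 100_000
energy = rng.uniform(0.01, 0.4, n)        # MeV, hypothetical photon energies
theta = np.arccos(rng.uniform(-1, 1, n))  # polar emission angle (rad)
phi = rng.uniform(0, 2 * np.pi, n)        # azimuthal emission angle (rad)
z = rng.uniform(-0.2, 0.2, n)             # cm, position along the source axis

sample = np.stack([energy, theta, phi, z], axis=1)
edges = [np.linspace(0.01, 0.4, 41),      # binning is the trade-off the thesis
         np.linspace(0, np.pi, 31),       # studies: fine enough to reproduce
         np.linspace(0, 2 * np.pi, 31),   # the dose distribution, coarse
         np.linspace(-0.2, 0.2, 21)]      # enough to keep the file small
hist, _ = np.histogramdd(sample, bins=edges)

# Normalised, such a histogram can be sampled to generate source particles
# for a Monte Carlo dose engine instead of replaying the full phase space.
print(hist.shape, hist.sum())
```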
70

Extracting Adverse Drug Reactions from Product Labels using Deep Learning and Natural Language Processing / Detektering av läkemedels biverkningar i bipacksedel med hjälp av maskininlärning

Bista, Shachi January 2020 (has links)
Pharmacovigilance relates to activities involving drug safety monitoring in the post-marketing phase of the drug development life-cycle. Despite the rigorous trials and experiments that drugs undergo before they reach the market, they can still cause previously unobserved side-effects (also known as adverse events) due to drug–drug interactions or genetic, physiological or demographic factors. The Uppsala Monitoring Centre (UMC) is the custodian of the global reporting system for adverse drug reactions, VigiBase, in collaboration with the World Health Organization (WHO). VigiBase houses over 20 million case reports of suspected adverse drug reactions from all around the world. However, not all case reports that the UMC receives pertain to adverse reactions that are novel in the safety profile of the drugs. In fact, many of the reported reactions found in the database are known adverse events for the reported drugs. With more than 3 million potential associations between all possible drugs and all possible adverse events present in the database, identifying associations that are likely to represent previously unknown safety concerns requires powerful statistical methods and knowledge of the known safety profiles of the drugs. There is therefore a need for a knowledge base mapping drugs to their known adverse reactions. To date, such a knowledge base does not exist. The purpose of this thesis is to develop a deep-learning model that learns to extract adverse reactions from product labels (regulatory documents providing the current state of knowledge of the safety profile of a given product) and map them to a standardized terminology with high precision. To achieve this, I propose a two-phase algorithm, with a first scanning phase aimed at finding regions of the text representing adverse reactions, and a second mapping phase aimed at normalizing the detected text fragments into Medical Dictionary for Regulatory Activities (MedDRA) terms, the terminology used at the UMC to represent adverse reactions. A previous dictionary-based algorithm developed at the UMC achieved a scanning F1 of 0.42 (0.31 precision, 0.66 recall) and a mapping macro-averaged F1 of 0.43 (0.39 macro-averaged precision, 0.64 macro-averaged recall). State-of-the-art methods achieve F1 above 0.8 and above 0.7 for the scanning and mapping problems respectively. To develop algorithms for adverse reaction extraction, I use the 2019 ADE Evaluation Challenge data, a dataset made by the FDA with 100 product labels annotated for adverse events and their mappings to MedDRA. This thesis explores three architectures for the scanning problem: 1) a Bidirectional Long Short-Term Memory (BiLSTM) encoder followed by a softmax classifier, 2) a BiLSTM encoder with a Conditional Random Field (CRF) classifier and, finally, 3) a BiLSTM encoder with a CRF classifier and Embeddings from Language Model (ELMo) embeddings. For the mapping problem, I explore Information Retrieval techniques using the search engines whoosh and Solr, as well as a Learning to Rank algorithm. The BiLSTM encoder with CRF gave the highest performance on finding the adverse events in the texts, with an F1 of 0.67 (0.75 precision, 0.61 recall), representing a 0.06 absolute increase in F1 over the simpler BiLSTM encoder with softmax. Using the ELMo embeddings proved detrimental and lowered the F1 to 0.62.
Error analysis revealed the adopted Inside, Beginning, Outside (IOB2) labelling scheme to be poorly adapted for denoting discontinuous and compound spans while introducing ambiguity in the training data. Based on the gold standard annotated mappings, I also evaluated the whoosh and Solr search engines, with and without Learning to Rank. The best-performing search engine on this data was Solr, with a macro-averaged F1 of 0.49 compared to the macro-averaged F1 of 0.47 for the whoosh search engine. Adding a Learning to Rank algorithm on top of each engine did not improve mapping performance, as both macro-averaged F1 dropped by over 0.1 when using the re-ranking approach. Finally, the best-performing scanning and mapping algorithms beat the aforementioned dictionary-based baseline F1 by 0.25 in the scanning phase and 0.06 in the mapping phase. A large source of error for the Solr search engine came from tokenisation issues, which had a detrimental impact on the performance of the entire pipeline. In conclusion, modern Natural Language Processing (NLP) techniques can significantly improve the performance of adverse event detection from free-form text compared to dictionary-based approaches, especially in cases where context is important. / Farmakovigilans berör de aktiviteter som förbättrar förståelsen av biverkningar av läkemedel. Trots de stränga prövningar som behövs för läkemedelsutvecklingen finns ändå en del biverkningar som är okända p.g.a. genetik, fysiologiska eller demografiska faktorer. Uppsala Monitoring Centre (UMC), i samarbete med World Health Organization (WHO) är vårdnadshavare till den globala databasen av rapporter på medicinska biverkningar, VigiBase. VigiBase innehåller över 20 miljoner misstänkta rapporter från hela världen. Dock, en andel av dessa rapporter beskriver biverkningar som är redan kända. Egentligen finns det över 3 miljoner potentiella samband mellan alla läkemedel och biverkningar i databasen. Att hitta den riktiga och okända biverkningar behövs kraftfulla statistiska metoder samt kunskap om det kända säkerhetsprofil av läkemedlet. Det finns ett behöv för ett databas som kartlägger läkemedel med alla kända biverkningar men, inget sådant databas finns idag. Syftet med detta examensarbete är att utveckla en djup-lärandemodell som kan läsa av texter på läkemedels etiketter (tillsynsdokument som beskriver säkerhetsprofil av läkemedel) och kartlägga dem till ett standardiserat terminologi med hög precision. Problemet kan brytas in i två fas, den första scanning och den andra mapping. Scanning handlar om att kartlägga position av text-fragmentet i etiketter. Mapping handlar om att kartlägga de detekterade text-fragmentet till Medical Dictionary for Regulatory Activities (MedDRA), den terminologi som används i UMC för biverkningar. Tidigare försök, s.k. dictionary-based approach på UMC uppnådde scanning F1 i 0,42 (0,31 precision; 0,64 recall) och mapping macro-averaged F1 i 0,43 (0,39 macro-averaged precision; 0,64 macro-averaged recall). De bästa systemen (s.k. state-of-the-art) uppnådde scanning F1 över 0,8 och 0,7 för den scanning respektive mapping problemet. Jag använder den 2019 ADE Evaluation Challenge dataset att utveckla algoritmerna i projektet. Detta dataset innehåller 100 läkemedels etiketter annoterad med biverkningar och deras kartläggning i MedDRA.
Denna avhandling utforskar tre arkitekturer till scanning problemet: 1) Bidirectional Long Short-Term Memory (BiLSTM) och softmax för klassificering, 2) BiLSTM med Conditional Random Field (CRF) klassificering och, till sist, 3) BiLSTM med CRF klassificering och Embeddings from Language Model (ELMo) embeddings. Med avseende till mapping problematiken utforskar jag metoder inom Information Retrieval genom användning av sökmotorerna whoosh och Solr. För att förbättra prestandan i mapping utforskar jag Learning to Rank metoder. BiLSTM med CRF presterade bäst inom scanning problematiken med F1 i 0,67 (0,75 precision; 0,61 recall) som är ett 0,06 absolut ökning över den BiLSTM encoder med softmax klassificering. Med ELMo försämrade F1 till 0,62. Analys av felet visade att Inside, Beginning, Outside (IOB2) märkning som jag har valt att använda passar inte till att beteckna diskontinuerliga och sammansatta spans, och tillför betydande osäkerhet i träningsdata. Med avseende till mapping problematiken har jag kollat på sökmotorn Solr och whoosh, med, och utan Learning to Rank. Solr visade sig som den bäst presterande sökmotorn med macro-averaged F1 i 0,49 jämfört med whoosh som visade macro-averaged F1 i 0,47. Learning to Rank algoritmerna försämrade F1 med över 0,1 för båda sökmotorer. Den bäst presterande scanning och mapping algoritmer slog den baseline systemets F1 med 0,25 i scanning faset, och 0,06 i mapping fasen. Ett stor källa av fel för den Solr sökmotorn har kommit från tokeniserings-fel, som hade en försämringseffekt i prestanda genom hela pipelinen. I slutsats, moderna Natural Language Processing (NLP) tekniker kan kraftigt öka prestanda inom detektering av biverkningar från etiketter och texter, jämfört med gamla dictionary metoder, särskilt när kontexten är viktigt.
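The scanning phase described in this abstract frames adverse-event detection as token labelling with the IOB2 scheme, which the error analysis found hard to apply to discontinuous mentions. The sketch below shows, on a hypothetical example sentence with assumed label names ("B-ADE"/"I-ADE") and token offsets, how character-level span annotations are typically converted into IOB2 tags and why a discontinuous mention has to be split; it is an illustration of the scheme, not code from the thesis.

```python
def to_iob2(tokens, spans):
    """Convert character-level entity spans into IOB2 token labels.

    tokens : list of (text, start, end) tuples
    spans  : list of (start, end) character offsets of adverse-event mentions
    Contiguous spans map cleanly; a discontinuous mention must be split into
    separate spans, one source of the ambiguity noted in the error analysis.
    """
    labels = []
    for _text, start, end in tokens:
        tag = "O"
        for s_start, s_end in spans:
            if start >= s_start and end <= s_end:
                tag = "B-ADE" if start == s_start else "I-ADE"
                break
        labels.append(tag)
    return labels

# Hypothetical sentence: "may cause nausea and severe vomiting"
tokens = [("may", 0, 3), ("cause", 4, 9), ("nausea", 10, 16),
          ("and", 17, 20), ("severe", 21, 27), ("vomiting", 28, 36)]
spans = [(10, 16), (21, 36)]   # "nausea" and "severe vomiting"
print(list(zip([t[0] for t in tokens], to_iob2(tokens, spans))))
# [('may','O'), ('cause','O'), ('nausea','B-ADE'),
#  ('and','O'), ('severe','B-ADE'), ('vomiting','I-ADE')]
```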
