About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Application of Electrospray Mass Spectrometry Toward Semi-Automated Kinase Inhibitor Screening

Partserniak, Ivan 08 1900 (has links)
Multi-site phosphorylation of protein targets by specific kinases is a common event used to propagate biological messages through signal transduction pathways in the context of the cellular environment and is a vital regulatory mechanism for many metabolic processes. Recent advances in the study of the protein glycogen synthase kinase-3 (GSK-3) have shed some light on the intricate role this enzyme plays within the framework of mammalian cellular metabolism. Abnormal behaviour of GSK-3 profoundly impacts cellular function, and is implicated in Alzheimer's disease and the development of Type II diabetes. A key issue in assaying the activity of GSK-3 is the ability to distinguish between singly and multiply phosphorylated substrates, as this enzyme can selectively phosphorylate a previously phosphorylated (primed) substrate. Given the serious nature of the disorders caused by the dysfunction of this kinase, high-throughput screening of specific inhibitors from compound libraries is urgently needed. Unfortunately, many existing kinase screening technologies are geared towards monitoring single phosphorylation events and thus are not amenable to effective assaying of multiply phosphorylated substrates. In this thesis, a novel, solution-based assay method based on electrospray ionization tandem mass spectrometry (ESI-MS/MS) is developed as a platform for inhibitor screening, with full consideration given to the specific nature of GSK-3 substrates and products. The semi-automated application of this assay is possible using an in-line autosampler, and is shown to be a potentially effective means for screening primed-binding-site inhibitors from compound mixtures, with subsequent deconvolution performed to isolate the effective molecule. Optimization of the MS-based assay required significant alterations in buffer conditions compared to those used in the standard GSK-3 radioassay based on γ-32P ATP, owing to the inability of electrospray ionization to tolerate high buffer concentrations. Preliminary screening of mixtures was demonstrated, and expansion to screening of large compound libraries consisting of previously untested compounds and natural product extracts should be possible.

To investigate the adaptation of the GSK-3 MS/MS assay to allow mixture deconvolution, a preliminary study was performed on the utilization of sol-gel technology for entrapment of GSK-3, with the aim of developing a solid-phase affinity assay for pull-down of bioactive ligands identified in enzyme activity assays. This method requires the preservation of enzyme function within the silica matrix, which had not been previously demonstrated for GSK-3. The sol-gel entrapment of GSK-3, however, proved to be problematic. Implementation of a flow-through assay using immobilized GSK-3 was hampered by issues such as non-specific adsorption of the cationic substrate and inhibitors, owing to electrostatic interactions with the anionic silica matrix used for enzyme entrapment. Future work aimed at further developing and optimizing the sol-gel materials and processing methods is proposed. / Thesis / Master of Science (MSc)
2

Capturing semi-automated decision making : the methodology of CASADEMA

Nilsson, Maria January 2010 (has links)
This thesis presents a new methodology named CASADEMA (CApturing Semi-Automated DEcision MAking) which captures the interaction between humans and the technology they use to support their decision-making within the domain of Information Fusion. We are particularly interested in characterising the interaction between human decision makers and artefacts in semi-automated fusion processes. In our investigation we found that the existing approaches are limited in their ability to capture such interactions in sufficient detail. The presented method is built upon a distributed-cognition perspective. The use of this particular theoretical framework from cognitive science enables the method to take into account not only the role of the data captured in the physical and digital artefacts of the fusion system (e.g., radar readings, information from a fax or database, a piece of paper, etc.), but also the cognitive support function of the artefacts themselves (e.g., as an external memory) as part of the fusion process. That is, the interdependencies between the fusion process and decision-making can be captured. This thesis thus contributes to two main fields. Firstly, it enables, through CASADEMA, a distributed-cognition perspective of fusion processes in the otherwise rather technology-oriented field of Information Fusion. This has important conceptual implications, since it views fusion processes as extending beyond the boundary of physical/computer systems, to include humans, technology, and tools, as well as the interactions between them. It is argued that a better understanding of these interactions can lead to a better design of fusion processes, making CASADEMA an important contribution to the information fusion field. Secondly, the thesis provides, again in the form of CASADEMA, a practical application of the distributed-cognition theoretical framework. Importantly, the notations and definitions introduced in CASADEMA structure the otherwise currently rather loosely defined concepts and approaches in distributed cognition research. Hence, the work presented here also contributes to the fields of cognitive science and human-computer interaction. / Examining Committee: Henrik Artman, Docent (Kungliga tekniska högskolan), Nils Dahlbäck, Professor (Linköpings universitet), Anna-Lisa Osvalder, Professor (Chalmers tekniska högskola)
3

Subword Spotting and Its Applications

Davis, Brian Lafayette 01 May 2018 (has links)
We propose subword spotting, a generalization of word spotting in which the search is for groups of characters within words. We present a method for performing subword spotting based on state-of-the-art word spotting techniques and evaluate its performance at three granularities (unigrams, bigrams, and trigrams) on two datasets. We demonstrate three applications of subword spotting, though others may exist. The first is assisting human transcribers in identifying unrecognized characters by locating them in other words. The second is searching for suffixes directly in word images (suffix spotting). The third is computer-assisted transcription (semi-automated transcription). We investigate several variations of computer-assisted transcription using subword spotting, but none achieves transcription speeds faster than manual transcription; we investigate the causes.
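Since the abstract does not spell out the mechanics, here is a minimal sketch of the sliding-window matching idea that subword spotting generalizes; the feature extractor is stubbed with random embeddings, and `spot_subword`, the mean-pooling scheme, and all dimensions are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def spot_subword(query_emb, word_feats, window):
    """Slide a fixed-width window over a word image's column features and
    return the best-matching offset and its similarity score."""
    best_pos, best_score = -1, -np.inf
    for start in range(word_feats.shape[0] - window + 1):
        emb = word_feats[start:start + window].mean(axis=0)  # pool the window
        score = cosine(query_emb, emb)
        if score > best_score:
            best_pos, best_score = start, score
    return best_pos, best_score

rng = np.random.default_rng(0)
word = rng.normal(size=(120, 64))   # a 120-column word image, 64-dim features
query = rng.normal(size=64)         # embedding of a query bigram/trigram
pos, score = spot_subword(query, word, window=20)
print(f"best window starts at column {pos} (score {score:.3f})")
```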
4

Semi-automated annotation and active learning for language documentation

Palmer, Alexis Mary 03 April 2013 (has links)
By the end of this century, half of the approximately 6000 extant languages will cease to be transmitted from one generation to the next. The field of language documentation seeks to make a record of endangered languages before they reach the point of extinction, while they are still in use. The work of documenting and describing a language is difficult and extremely time-consuming, and resources are severely limited. Developing efficient methods for making lasting records of languages may increase the amount of documentation achieved within budget restrictions. This thesis approaches the problem from the perspective of computational linguistics, asking whether and how automated language processing can reduce human annotation effort when very little labeled data is available for model training. The task addressed is morpheme labeling for the Mayan language Uspanteko, and we test the effectiveness of two complementary types of machine support: (a) learner-guided selection of examples for annotation (active learning); and (b) annotator access to the predictions of the learned model (semi-automated annotation). Active learning (AL) has been shown to increase the efficacy of annotation effort for many different tasks. Most of the reported results, however, are from studies which simulate annotation, often assuming a single, infallible oracle. In our studies, crucially, annotation is not simulated but rather performed by human annotators. We measure and record the time spent on each annotation, which in turn allows us to evaluate the effectiveness of machine support in terms of actual annotation effort. We report three main findings with respect to active learning. First, in order for efficiency gains reported from active learning to be meaningful for realistic annotation scenarios, the type of cost measurement used to gauge those gains must faithfully reflect the actual annotation cost. Second, the relative effectiveness of different selection strategies in AL seems to depend in part on the characteristics of the annotator, so it is important to model the individual oracle or annotator when choosing a selection strategy. And third, the cost of labeling a given instance from a sample is not a static value but rather depends on the context in which it is labeled. We report two main findings with respect to semi-automated annotation. First, machine label suggestions have the potential to increase annotator efficacy, but the degree of their impact varies by annotator, with annotator expertise a likely contributing factor. At the same time, we find that implementation and interface must be handled very carefully if we are to accurately measure gains from semi-automated annotation. Together these findings suggest that simulated annotation studies fail to model crucial human factors inherent to applying machine learning strategies in real annotation settings.
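To make the cost-measurement point concrete, here is a minimal uncertainty-sampling loop in the spirit the abstract describes, with wall-clock time recorded per labeling step; the synthetic dataset, the logistic-regression learner, and the instant "oracle" are stand-ins, not the thesis's Uspanteko setup.

```python
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # synthetic labels

# Seed set with both classes present; the rest forms the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]
costs = []                                            # seconds per annotation

for step in range(20):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    pick = pool[int(np.argmin(np.abs(proba[:, 1] - 0.5)))]  # most uncertain
    t0 = time.perf_counter()
    label = y[pick]                    # a human annotator would label here
    costs.append(time.perf_counter() - t0)   # actual, not simulated, cost
    labeled.append(pick)
    pool.remove(pick)

print(f"labeled {len(labeled)} instances; mean cost {np.mean(costs):.2e}s")
```

In a real study the timed span would cover the annotator's whole interaction with the instance, which is exactly the measurement the thesis argues must be faithful.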
5

Examining the Effects of Discussion Strategies and Learner Interactions on Performance in Online Introductory Mathematics Courses: An Application of Learning Analytics

Lee, Ji Eun 01 August 2019 (has links)
This dissertation study explored: 1) instructors' use of discussion strategies that enhance meaningful learner interactions in online discussions and student performance, and 2) learners' interaction patterns in online discussions that lead to better student performance in online introductory mathematics courses. In particular, the study applied a set of data mining techniques to a large-scale dataset automatically collected by the Canvas Learning Management System (LMS) over five consecutive years at a public university in the U.S., covering 2,869 students enrolled in 72 courses. First, the study found that courses that posted more open-ended prompts, evaluated discussion messages posted by students, used focused discussion settings (i.e., allowing a single response and replies to that response), and provided more elaborated feedback had higher student final grades than those that did not. Second, the results showed that the instructors' use of discussion strategies (discussion structures) influenced the quantity (volume of discussion), the breadth (distribution of participation throughout the discussion), and the quality of learner interactions (levels of knowledge construction) in online discussions. Lastly, the results revealed that students' messages related to allocentric elaboration (i.e., taking up peers' contributions in argumentative or evaluative ways) and application (i.e., application of new knowledge) showed the highest predictive value for course performance. The findings suggest that it is important to provide opportunities for learners to freely discuss course content, rather than creating discussion tasks aimed at producing a correct answer, in introductory mathematics courses. Other findings reported in the study can serve as guidance for instructors and instructional designers on how to design better online mathematics courses.
6

Adaptable Semi-Automated 3D Segmentation Using Deep Learning with Spatial Slice Propagation

Agerskov, Niels January 2019 (has links)
Even with the recent advances of deep learning pushing the field of medical image analysis further than ever before, progress is still slow due to the limited availability of annotated data. There are multiple reasons for this, but perhaps the most prominent one is the amount of time manual annotation of medical images takes. In this project a semi-automated algorithm is proposed that approaches the segmentation problem in a slice-by-slice manner, utilising the prediction for the previous slice as a prior for the next. This both allows the algorithm to segment entirely new cases and gives the user the ability to correct faulty slices, propagating the correction throughout the volume. Results on par with the current state of the art are achieved within the domain of the training data. In addition, cases outside of the training domain can be segmented with some accuracy, paving the way for further improvement. The strategy for training the network to utilise the auxiliary input lies in heavy online data augmentation, forcing the network to rely on the provided prior.
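A rough sketch of the propagation loop described above, under assumed details: the real model is a deep network taking the prior mask as an extra channel, whereas `segment_slice` here is a trivial stand-in, and the correction mechanism is simplified to a per-slice mask override.

```python
import numpy as np

def segment_slice(image_2d, prior_mask):
    """Stand-in for a 2D segmentation network that takes the previous
    slice's predicted mask as an auxiliary input channel."""
    return (image_2d * (0.5 + 0.5 * prior_mask) > 0.4).astype(np.uint8)

def propagate(volume, init_mask, corrections=None):
    """Segment a volume slice by slice from a manually annotated first
    slice; user corrections overwrite a slice's mask and propagate onward."""
    corrections = corrections or {}
    masks = np.zeros(volume.shape, dtype=np.uint8)
    masks[0] = init_mask
    for z in range(1, volume.shape[0]):
        prediction = segment_slice(volume[z], masks[z - 1])
        masks[z] = corrections.get(z, prediction)
    return masks

volume = np.random.rand(30, 64, 64)          # toy image volume
init = (volume[0] > 0.5).astype(np.uint8)    # "manual" first-slice annotation
masks = propagate(volume, init)
print(masks.shape, int(masks.sum()))
```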
7

Development of a semi-automatic system for collecting and fractionating plankton, measuring physical and chemical variables of the water, and determining the size spectrum and biomass of zooplankton

Arantes Junior, João Durval 22 December 2006 (has links)
A major problem with limnological studies performed manually in the laboratory is the large effort, analysis time, and specialized labor required; these factors limit the number of samples that can be analyzed in a given study, since reagents, funding, and time are all limited. In the present work a semi-automated system for measuring physical and chemical variables of the water was used, composed of a multi-parameter probe (Horiba U-22) and a global positioning system (GPS) coupled to a microcomputer, which performs georeferenced measurements at short time intervals and allows horizontal tracking of water characteristics in much shorter times than traditional methods. A second semi-automated device was developed for collecting separate plankton size fractions, using a battery-operated suction pump coupled to a collecting filter with plankton nets of different mesh sizes. The collected material was photographed with a digital image acquisition system (a Zeiss microscope equipped with an AxioCam camera). In addition, a software package (Planktonscan) was produced that, from the captured images, estimates the dimensions of individual organisms (length, width, and height), calculates biovolumes and, using conversion factors, estimates biomass for each zooplankton organism identified in the sample. The software provides an identification interface, calculates organism densities, and produces a graphical report with information on individual organisms and on the community. Both systems were tested in limnological analyses and plankton sampling in the Monjolinho Reservoir, São Carlos, SP, in December 2005. Performance was good, with a larger number of points sampled (60) in a shorter sampling time (1 hour) than traditional methods usually require. The biomass results provided by the Planktonscan software were compared with literature data obtained by the traditional gravimetric method for dry-weight determination and with data generated from available mathematical models (length/dry-weight regressions). The results were expressed as species population densities, biomasses, and size spectra, demonstrating the applicability of the system.
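The length/dry-weight regressions mentioned above are standard allometric models of the form W = a·L^b (equivalently, ln W = ln a + b·ln L); a tiny illustrative sketch follows, with placeholder coefficients rather than Planktonscan's actual conversion factors.

```python
def dry_weight_ug(length_mm, a, b):
    """Allometric length-to-dry-weight conversion: W = a * L**b,
    i.e. ln(W) = ln(a) + b * ln(L)."""
    return a * length_mm ** b

# Hypothetical coefficients and body lengths -- not Planktonscan's values.
lengths_mm = [0.4, 0.8, 1.2]
total = sum(dry_weight_ug(L, a=6.0, b=2.7) for L in lengths_mm)
print(f"total estimated dry weight: {total:.2f} ug")
```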
8

A comparison of whole life cycle costs of robotic, semi-automated, and manual build airport baggage handling systems

Bradley, Alexandre January 2013 (has links)
This thesis proposes that a baggage handling system (BHS) environment can be defined and coupled to a whole life cycle cost (WLCC NPV) model. The results from specific experiments using the model can be used as the basis on which to commercially compare BHS flight build types of any capacity and any BHS geographical location. The model examined three flight build types: (i) fully automatic build; (ii) semi-automatic build; and (iii) manual build. The model has the ability to calculate a bag-flow busy-hour rate and to replicate the baggage flow characteristics observed within real BHS operations. Whole life cycle cost (WLCC NPV) results are produced, and these form the basis on which the comparison of BHS types is made. An overall WLCC NPV scatter diagram was produced, which is a summation of each of the test sensitivities. The assumptions and limitations of the analysis are provided. It is proposed that the results, conclusions and recommendations shall be of value to airports, airlines, and design consultants.
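For readers unfamiliar with the WLCC NPV idea, a hedged sketch of the underlying discounting calculation follows: each build type's cost stream is reduced to a net present value, NPV = Σ Cₜ/(1+r)ᵗ. The cost figures, horizon, and discount rate are invented for illustration and are not the thesis's data.

```python
def wlcc_npv(capex, yearly_opex, years, rate):
    """Capital cost up front plus operating costs discounted to present value."""
    return capex + sum(yearly_opex / (1 + rate) ** t for t in range(1, years + 1))

builds = {                      # hypothetical cost profiles (capex, opex/yr)
    "fully automatic": (9.0e6, 0.4e6),
    "semi-automatic": (6.0e6, 0.8e6),
    "manual": (3.0e6, 1.5e6),
}
for name, (capex, opex) in builds.items():
    npv = wlcc_npv(capex, opex, years=20, rate=0.05)
    print(f"{name:>16}: NPV = {npv / 1e6:.1f} M")
```

Sweeping the inputs (rate, horizon, bag-flow rate) over ranges is what produces the kind of sensitivity scatter the thesis reports.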
9

A BDI-Based Multiagent Simulation Framework

Yukselen, Murat 01 October 2008 (has links) (PDF)
Modeling and simulation of military operations are becoming popular with the widespread application of artificial intelligence methods. As decision makers would like to analyze the results of the simulations in greater detail, entity-level simulation of the physical world and of the activities of actors (soldiers, tanks, etc.) is unavoidable. In this thesis, a multiagent framework for simulating task-driven autonomous activities of actors or groups of actors is proposed. The framework is based on the BDI architecture, in which an agent is composed of beliefs, goals, and plans. In addition, an agent team is organized hierarchically, and decisions at different levels of the hierarchy are governed by virtual command agents with their own beliefs, goals, and plans. The framework supports an interpreter that realizes the execution of single-agent or multiagent plans coherently. The framework is implemented, and a case study demonstrating its capabilities is carried out.
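A minimal sketch of the BDI building blocks the abstract names (beliefs, goals, plans) with a toy deliberation step; the class layout, field names, and example scenario are illustrative assumptions, not the framework's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str
    steps: list                                 # ordered actions achieving the goal
    context: set = field(default_factory=set)   # beliefs required to apply it

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    goals: list = field(default_factory=list)
    plans: list = field(default_factory=list)

    def deliberate(self):
        """Pick the first plan whose goal is currently desired and whose
        context beliefs all hold."""
        for plan in self.plans:
            if plan.goal in self.goals and plan.context <= self.beliefs:
                return plan
        return None

scout = Agent(
    beliefs={"enemy_spotted"},
    goals=["report_contact"],
    plans=[Plan("report_contact", ["halt", "radio_report"], {"enemy_spotted"})],
)
print(scout.deliberate().steps)   # ['halt', 'radio_report']
```

In a hierarchical team, a virtual command agent of the same shape would hold team-level beliefs and goals and delegate plan steps to subordinate agents.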
