321

A cultural education model : design and implementation of adaptive multimedia interfaces in eLearning

Stewart, Craig January 2012
This thesis presents research performed over the span of nine years in the area of adaptive multimedia interfaces (specifically, Adaptive Hypermedia in eLearning), with special focus on a cultural education model. In particular, the thesis looks at how adaptive interfaces can cater for cultural diversity in education, instead of presenting a homogeneous delivery to the whole student population regardless of their cultural background. Specifically, this research provides a framework for cultural adaptation, CAE (Cultural Artefacts in Education), based on Marcus & Gould's web model as well as its source, Hofstede's indexes. This framework is supported by a questionnaire, the CAE questionnaire, a key product of this research, which has been shown to map onto Hofstede's indexes and which has been used to model features for personalised adaptive interfaces for different cultures. The questionnaire is in English, but this work also presents a study showing to what extent the results obtained are similar to native-language questionnaire results. The CAE framework is further extended by two ontologies: a full-scale ontology, called the CAE-F ontology, and a lightweight ontology, called the CAE-L ontology. These ontologies detail the HCI (Human-Computer Interaction) features that need to be integrated into an adaptive system in order to cater for cultural adaptation. These features can be used for all types of adaptation, as defined in adaptive hypermedia. The latter ontology is then illustrated in a study of eleven countries for the specific case of interface adaptation, an area in which current research is extremely sparse. These illustrations are further used in a formative evaluation, which establishes to what extent the cultural adaptation ontologies can be applied. This is followed by a summative, real-life evaluation of cultural adaptation for Romanian students, whose results are reported and discussed; this study validates the proof of concept for using CAE in a real-world setting. Finally, the overall achievements of this work are summarised, conclusions are drawn, and recommendations for further research are made.
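To make the idea concrete, the mapping at the heart of such a framework, from Hofstede-style cultural index scores to concrete interface-adaptation features, can be sketched as below. The country scores, thresholds and feature choices are illustrative assumptions, not the actual CAE ontology.

```python
# Minimal sketch: deriving interface-adaptation hints from Hofstede-style
# cultural indexes, in the spirit of the CAE framework described above.
# Dimension names follow Hofstede; the scores, thresholds and feature
# mappings here are illustrative assumptions, not the CAE ontology itself.

HOFSTEDE_SCORES = {
    # country: (power distance, individualism, uncertainty avoidance)
    "Romania": (90, 30, 90),
    "UK": (35, 89, 35),
}

def interface_features(country):
    pdi, idv, uai = HOFSTEDE_SCORES[country]
    return {
        # High power distance: hierarchical, instructor-led navigation.
        "navigation": "hierarchical" if pdi > 60 else "free",
        # Low individualism: emphasise group tools (forums, shared progress).
        "collaboration_tools": idv < 50,
        # High uncertainty avoidance: more guidance, fewer open-ended choices.
        "guidance_level": "high" if uai > 60 else "low",
    }

if __name__ == "__main__":
    for country in HOFSTEDE_SCORES:
        print(country, interface_features(country))
```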
322

Educational games to engage the un-engageable

Carr, John January 2011
Behavioural, emotional and social difficulties in school-aged children are a significant problem in the UK, and such children represent a difficult challenge for educational institutions. Teachers and experts report that these children find it almost impossible to stay on task in educational activities for more than a trivial amount of time. Interest in computer games as a medium for learning and other non-entertainment purposes has risen significantly in recent years, in part because they can provide an engaging experience that motivates users. This makes the medium an attractive tool for this demographic. There are, however, many problems facing designers who attempt to integrate educational content into a game platform. Effective integration of game and education has long been a problematic issue in educational game development: gameplay aspects are often overlooked in academic projects, and while good educational games should integrate the learning content with the game experience, this is particularly difficult to achieve effectively. This thesis details a study to design educational games to aid behavioural, emotional and social learning. The methodology attempts to blend good game design principles with educational content in such a way that users can be engaged with both the activity and the educational concepts contained within it. Two trials were undertaken in schools with participants suffering from a range of severe behavioural, emotional or social problems. The results provide evidence suggesting that, if educational gameplay is achieved, these children can be engaged not only with the game as an activity but with the educational content on which it is based. The implications are then explored and the potential of educational gameplay evaluated in the context of the wider industry of educational and serious games. While this method of integrating educational content within game platforms is effective, it is difficult to achieve in many subject areas, perhaps prohibitively so.
323

The suitability of the dendritic cell algorithm for robotic security applications

Oates, Robert Foster January 2010
The implementation and running of physical security systems is costly and potentially hazardous for those employed to patrol areas of interest. From a technical perspective, the physical security problem can be seen as minimising the probability that intruders and other anomalous events will occur unobserved. A robotic solution is proposed using an artificial immune system, traditionally applied to software security, to identify threats and hazards: the dendritic cell algorithm. It is demonstrated that the migration from the software world to the hardware world is achievable for this algorithm, and key properties of the resulting system are explored empirically and theoretically. It is found that the algorithm has a hitherto unknown frequency-dependent component, making it ideal for filtering out sensor noise. Weaknesses of the algorithm are also discovered by mathematically phrasing the signal-processing phase as a collection of linear classifiers. It is concluded that traditional machine learning approaches are likely to outperform the implemented system in its current form. However, it is also observed that the algorithm's inherent filtering characteristics make modification, rather than rejection, the most beneficial course of action. Hybridising the dendritic cell algorithm with more traditional machine learning techniques, through the introduction of a training phase and the use of a non-linear classification phase, is suggested as a possible future direction.
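The signal-processing phase that the thesis recasts as linear classifiers can be sketched as follows: each output signal of a dendritic cell is a weighted sum of the input signals, and the final anomaly decision compares two of those accumulated sums. The weight matrix below is illustrative only; published DCA variants use empirically chosen weights.

```python
import numpy as np

# Sketch of the dendritic cell algorithm's signal fusion: each output signal
# is a weighted sum of the input signals (PAMP, danger, safe), which is why
# the phase can be analysed as a collection of linear classifiers.
# These weights are illustrative assumptions.
W = np.array([
    # PAMP  danger  safe
    [ 2.0,   1.0,   2.0],   # costimulation (maturation clock)
    [ 0.0,   0.0,   3.0],   # semi-mature output (evidence of normality)
    [ 2.0,   1.0,  -3.0],   # mature output (evidence of anomaly)
])

def process_signals(pamp, danger, safe):
    """One fusion step: map the three input signals to output signals."""
    return W @ np.array([pamp, danger, safe])

# At presentation time, a cell labels its antigen anomalous when accumulated
# mature output exceeds semi-mature output: a linear decision boundary in
# signal space, as the thesis points out.
csm, semi, mature = process_signals(pamp=0.2, danger=0.7, safe=0.1)
print("anomalous" if mature > semi else "normal")
```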
324

Bacterial auto-nemesis : templating polymers for cell sequestration

Magennis, Eugene Peter January 2013
The detection and control of microorganisms such as bacteria is important in a wide range of industries and clinical settings. Detection, binding and removal of such pathogenic contaminants can be achieved through judicious consideration of the targets available at or in the bacterial cell. Polymers can present a number of binding ligands for cell targeting on a single macromolecule, so the avidity of interaction can be greatly increased. The goal of the project was to test whether polymers generated with bacteria in situ would have their composition significantly altered, in order to determine whether a templating process was occurring. It was also anticipated that the templated polymers would have better re-binding properties than those produced in the absence of bacteria. A series of chemical functionalities were analysed for their binding properties to bacteria, chosen with consideration of the cell-surface characteristics. Following identification of the most- and least-binding functionalities, the polymers were tested for their cytotoxicity against bacteria and human epithelial cells, and concentration ranges were determined that could facilitate bacterial binding and templating while minimising the lethality of the process. Templated polymers of the bacteria were generated using a novel method of atom transfer radical polymerisation (ATRP), which we have termed bacterially activated atom transfer radical polymerisation (b-ATRP); this polymerisation method maximises the potential for templating processes to occur during the polymerisation. Templated polymers differed in both their composition and their binding behaviour from non-templated polymers. The bacterial organic reduction process was also shown to have greater scope for use within organic chemistry, as demonstrated by its application to enable "click" chemistry via the reduction of copper.
325

Computation with photochromic memory

Chaplin, Jack Christopher January 2013
Unconventional computing is an area of research in which novel materials and paradigms are utilised to implement computation and data storage. This includes attempts to embed computation into biological systems, which could allow the observation and modification of living processes. This thesis explores the storage and computational capabilities of a biocompatible light-sensitive (photochromic) molecular switch (NitroBIPS) that has the potential to be embedded into both natural and synthetic biological systems. To achieve this, NitroBIPS was embedded in a polydimethylsiloxane (PDMS) polymer matrix, and an optomechanical setup was built to expose the sample to optical stimulation and record fluorescent emission. NitroBIPS has two stable forms, one fluorescent and one non-fluorescent, and can be switched between the two via illumination with ultraviolet or visible light. By exposing NitroBIPS samples to specific stimulus pulse sequences and recording the intensity of fluorescence emission, data could be stored in registers, and logic gates and circuits could be implemented. In addition, by moving the area of illumination, sub-regions of the sample could be addressed, enabling parallel registers, Turing machine tapes and elementary cellular automata to be implemented. It has been demonstrated, therefore, that photochromic molecular memory can be used to implement conventional universal computation in an unconventional manner. Furthermore, because registers, Turing machine tapes, logic gates, logic circuits and elementary cellular automata all utilise the same samples and the same hardware, it has been shown that photochromic computational devices can be dynamically repurposed. NitroBIPS and related molecules have been shown elsewhere to be capable of modifying many biological processes, including inhibiting protein binding, perturbing lipid membranes and binding to DNA in a manner that depends on the molecule's form. The universal computation demonstrated in this thesis could therefore be combined with these biological manipulations, either as key components within synthetic biology systems or to monitor and control natural biological processes.
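A toy model of the storage scheme described above can be written in a few lines, assuming idealised pulse semantics; a real NitroBIPS/PDMS sample is driven and read optomechanically, and readout illumination can itself perturb the state.

```python
# Toy model of a photochromic one-bit register in the style described above:
# ultraviolet light switches the molecule to its fluorescent form (write 1),
# visible light switches it back (write 0), and reading means exciting the
# sample and measuring emission. The pulse semantics are a simplification.

class PhotochromicBit:
    def __init__(self):
        self.fluorescent = False  # starts in the non-fluorescent (closed) form

    def pulse(self, wavelength_nm):
        # UV illumination switches to the fluorescent form; visible switches back.
        self.fluorescent = wavelength_nm < 400

    def read(self):
        # Idealised readout: in a real sample, reading can perturb the state.
        return 1 if self.fluorescent else 0

# Addressing sub-regions with a movable illumination spot gives parallel
# registers; here, four independent bits stand in for four sub-regions.
register = [PhotochromicBit() for _ in range(4)]
for bit, value in zip(register, [1, 0, 1, 1]):
    bit.pulse(365 if value else 532)  # 365 nm UV writes 1; 532 nm visible writes 0
print([bit.read() for bit in register])  # -> [1, 0, 1, 1]
```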
326

Principled design of evolutionary learning systems for large scale data mining

Franco Gaviria, María Auxiliadora January 2013
Currently, the data mining and machine learning fields are facing new challenges because of the amount of information that is collected and needs processing. Many sophisticated learning approaches simply cannot cope with large and complex domains, because of unmanageable execution times or the loss of prediction and generality capacity that occurs as domains become more complex. Therefore, to cope with the volumes of information in current real-world problems, there is a need to push forward the boundaries of sophisticated data mining techniques. This thesis is focused on improving the efficiency of evolutionary learning systems in large-scale domains; specifically, its objective is to improve the efficiency of the Bioinformatic Hierarchical Evolutionary Learning (BioHEL) system, a system designed for handling large domains. This is a classifier system that uses an Iterative Rule Learning approach to generate a set of rules one by one, using consecutive genetic algorithms. The system has proven very competitive in large and complex domains: in particular, BioHEL has obtained very important results when solving protein structure prediction problems and has won related merits, such as being placed among the best algorithms for this purpose at the Critical Assessment of Techniques for Protein Structure Prediction (CASP) in 2008 and 2010, and winning the bronze medal at the HUMIES Awards for human-competitive results in 2007. However, there is still a need to analyse this system in a principled way, to determine how the current mechanisms work together to solve larger domains and to identify the aspects of the system that can be improved towards this aim. To fulfil the objective of this thesis, the work is divided into two parts. In the first part, exhaustive experimentation was carried out to determine ways in which the system could be improved. From this analysis, three main weaknesses are pointed out: a) the problem-dependency of parameters in BioHEL's fitness function, which results in a system that is difficult to set up and requires extensive preliminary experimentation to determine adequate parameter values; b) the execution time of the learning process, which at present does not use any parallelisation techniques and depends on the size of the training sets; and c) the lack of global supervision over the generated solutions, which stems from the use of the Iterative Rule Learning paradigm and produces larger rule sets with no guarantee of minimality or maximal generality. The second part of the thesis focuses on tackling each of these weaknesses to produce a system capable of handling larger domains. First, a heuristic approach for setting parameters within BioHEL's fitness function is developed. Second, a new parallel evaluation process that runs on general-purpose graphics processing units is introduced. Finally, post-processing operators to tackle the generality and cardinality of the generated solutions are proposed. By means of these enhancements we managed to improve the BioHEL system so as to reduce both the learning time and the preliminary experimentation time, increase the generality of the final solutions, and make the system more accessible to end users.
Moreover, as the techniques discussed in this thesis can easily be extended to other evolutionary learning systems, we consider them important additions to the research in this field towards tackling large-scale domains.
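The Iterative Rule Learning loop that BioHEL follows, learning one rule per GA run and removing the examples it covers, can be sketched as below. The inner rule learner is a trivial random-search stand-in for BioHEL's generational GA, and scoring by covered-example accuracy alone is a simplification of its fitness function.

```python
import random

# Sketch of the Iterative Rule Learning (IRL) paradigm: learn one rule at a
# time, remove the training examples it covers, and repeat until the training
# set is exhausted. `learn_one_rule` is a random-search stand-in for the GA.

def covers(rule, x):
    return rule["low"] <= x[rule["feature"]] <= rule["high"]

def learn_one_rule(data, trials=300):
    best, best_score = None, 0
    for _ in range(trials):                       # stand-in for GA evolution
        f = random.randrange(len(data[0][0]))
        low, high = sorted(random.uniform(0, 1) for _ in range(2))
        rule = {"feature": f, "low": low, "high": high}
        labels = [y for x, y in data if covers(rule, x)]
        if not labels:
            continue
        label = max(set(labels), key=labels.count)  # majority covered class
        if labels.count(label) > best_score:
            rule["label"] = label
            best, best_score = rule, labels.count(label)
    return best

def iterative_rule_learning(data):
    rules, remaining = [], list(data)
    while remaining:
        rule = learn_one_rule(remaining)
        if rule is None:                 # no rule matched anything: stop
            break
        rules.append(rule)
        remaining = [(x, y) for x, y in remaining if not covers(rule, x)]
    return rules

# Toy usage: 1-D points labelled by a threshold at 0.5.
points = [random.random() for _ in range(50)]
data = [((x,), int(x > 0.5)) for x in points]
print(len(iterative_rule_learning(data)), "rules learned")
```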
327

Ensidiga närstående i det generella närståendebegreppet : Problematiken med ensidiga närstående vid tillämpning av reglerna om lönebaserat utrymme / Unilateral Relatives within the General Definition of Relatives : The Problem of Unilateral Relatives in the Application of the Salary Cap-Rule

Jonsson, Julia January 2017
Chapter 2, Section 22 of the Swedish Income Tax Act (1999:1229, IL) contains the so-called general definition of relatives, which is used in numerous contexts throughout the act. The provision is worded in such a way that the relationships between the taxpayer and certain of their relatives are unilateral. Unilateral relatives refers to two persons A and B where A is a relative of B, but B does not count as a relative of A under the general definition. The phenomenon has been noted in various legal sources and is considered to cause application problems and differences in the taxation of persons who are unilateral relatives. There are, however, still areas in which the problem has not received the same attention. One of these areas is the rules on the salary-based allowance in the 3:12 rules. The purpose of this thesis is to highlight and analyse the problems arising from the existence of unilateral relatives within the general definition of relatives, and to examine whether the existence of unilateral relatives causes application problems and unequal treatment of unilateral relatives under the current rules on the salary-based allowance compared with the rules proposed in SOU 2016:75. Differences in the taxation of persons who count as unilateral relatives of each other are considered to arise mainly because one of them can make use of the other's position, salary or ownership share in the company, but not vice versa. Harsher taxation can affect both the person who has a relative and the person who is a relative, depending on which tax rule is applied. Although the legislator has taken certain measures to avoid the unequal treatment of unilateral relatives, these are not considered effective in a broader context. Given that the general definition of relatives is applied in various rules in the IL, there is an incentive to extend or restrict the definition so that it covers only mutual relatives. As regards the rules on the salary-based allowance, differences in the taxation of unilateral relatives may arise when applying the salary withdrawal requirement, because one party in the unilateral relationship may take the other's remuneration from the company into account, but not vice versa. The same discrepancy may arise when applying the rule on the cap for the salary-based allowance, although the legal position regarding this rule is considered unclear. No problems arise, however, in the allocation of the salary-based allowance, since the definition of relatives is not taken into account in that rule. Under the amendments proposed in SOU 2016:75, the rule on the cap for the salary-based allowance would be abolished, which means that any differences in the taxation of unilateral relatives in this respect would disappear. The new method for calculating the salary-based allowance is not considered to cause problems for unilateral relatives, since the allowance is allocated to a circle of relatives as defined in Chapter 56, Section 5 of the IL. As for the salary withdrawal requirement, the unequal treatment that can arise today is expected to disappear under the proposals: the requirement becomes more closely tied to the salary-based allowance and can be applied to the same circle of relatives. This means that all relatives in the circle will be able to take each other's remuneration from the company into account when assessing whether the salary withdrawal requirement is met. Whether the proposals in SOU 2016:75 will result in legislation remains to be seen, however.
328

Étude du rôle des xylosyltransférases I et II et de la kinase Fam20B dans la régulation de la biosynthèse des protéoglycanes / Investigation of the role of xylosyltransferases I and II and of the kinase Fam20B in the regulation of proteoglycan synthesis

Shaukat, Irfan 18 November 2015
Heparan sulfate (HS) and chondroitin sulfate (CS) proteoglycans (PGs) are essential regulators of many biological processes, including cell differentiation, signalling, proliferation and morphogenesis. Indeed, PGs act through their glycosaminoglycan (GAG) chains as receptors for growth factors, enzymes and cell adhesion proteins, thereby modulating their bioavailability, gradient formation and biological activity. The assembly of HS and CS GAG chains is initiated by the transfer of xylose to serine residues of the PG core protein by the xylosyltransferase (XT) enzymes XT-I and XT-II. These enzymes catalyse a rate-limiting step in the biosynthesis pathway and are therefore considered regulating factors in the GAG biosynthesis process. Besides regulation by the XT enzymes, GAG chain synthesis may also be regulated by phosphorylation of the xylose residue at the 2-O position by the kinase Fam20B. It has been shown that XT-I and XT-II are able to restore GAG-attached PG synthesis in xylosyltransferase-deficient cells, suggesting that they are functionally redundant. However, nothing is known of the specific roles, if any, of XT-I and XT-II, or of the impact of XT-I and XT-II mutations on the synthesis of CS- and HS-PGs.
Here, we show that XT-I initiates PGs with larger CS and HS GAG chains than XT-II does, and demonstrate that this is linked to their respective subcellular localisations. In addition, we addressed the question of whether genetic mutations of XT-I and XT-II associated with various diseases affect CS- and HS-PG synthesis, and found that mutations in XT-I strongly reduced the capacity of the enzyme to initiate the synthesis of both CS and HS GAG chains, while two mutations in XT-II abrogated the enzyme's capacity to initiate CS and HS GAGs and led to its mislocalisation. Furthermore, using various cell lines and catalytically dead mutants, we demonstrated that Fam20B negatively regulates the GAG synthesis process and that phosphorylation of the xylose residue by this kinase results in a blockage of the polymerisation process of the GAG chain.
329

Analyse et modélisation des performances d'un nouveau type de détecteur en médecine nucléaire : du détecteur Anger au détecteur semi-conducteur / Analysis and modelling of the performance of a new solid-state detector in nuclear medicine : from Anger- to Semiconductor-detectors

Imbert, Laëtitia 10 December 2012
Myocardial single-photon emission computed tomography (SPECT) is considered the gold standard for the diagnosis and evaluation of coronary artery disease. Developed in the 1980s with rotating Anger gamma cameras, this technique is being transformed by the arrival of new imaging systems based on semiconductor detectors. Two semiconductor cameras dedicated to nuclear cardiology and equipped with cadmium zinc telluride (CZT) detectors are currently commercialised: the Discovery NM-530c (General Electric) and the DSPECT (Spectrum Dynamics). The performance of these CZT cameras was evaluated: 1) on both phantom images and stress examinations from patients with a low likelihood of coronary artery disease, and 2) with the acquisition and reconstruction parameters used in clinical routine. The results demonstrate the clear superiority of the CZT cameras over the previous generation of Anger cameras in terms of detection sensitivity, spatial resolution and contrast-to-noise ratio. These properties should allow acquisition times and injected activities to be reduced very substantially while improving image quality. Nevertheless, the limits of these new cameras and the possible artefacts related to their particular acquisition geometry remain poorly understood. We therefore developed, with the GATE Monte Carlo simulation platform, a specific numerical simulator of the DSPECT camera, and validated it by comparing actually recorded data with simulated data. This simulator could help optimise acquisition and reconstruction protocols, in particular the most complex ones, such as simultaneous dual-radionuclide acquisitions and first-pass kinetic studies.
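Of the comparison metrics above, the contrast-to-noise ratio is the simplest to make concrete. The sketch below uses one common convention (contrast between a target region of interest and the background, divided by background noise) on synthetic Poisson counts; the thesis's actual ROI protocol may differ.

```python
import numpy as np

# One common definition of the contrast-to-noise ratio (CNR) used to compare
# cameras on phantom images: the difference between the mean counts in a
# target region of interest and in the background, divided by the standard
# deviation of the background. This convention is an assumption here.

def contrast_to_noise_ratio(target_roi, background_roi):
    target = np.asarray(target_roi, dtype=float)
    background = np.asarray(background_roi, dtype=float)
    return (target.mean() - background.mean()) / background.std()

# Toy example with synthetic photon counts for two regions of interest:
rng = np.random.default_rng(0)
target = rng.poisson(120, size=500)      # hot region of a phantom
background = rng.poisson(80, size=500)   # surrounding background
print(f"CNR = {contrast_to_noise_ratio(target, background):.1f}")
```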
330

Part-based tracking with cascaded regression of neighbours

Wang, Xiaomeng January 2018
Visual tracking aims to detect the location of a possibly moving target by extracting local appearance features and matching them between consecutive images to obtain accurate estimates of the target's location. Tracking of generic objects is one of the most active topics in computer vision. Despite the large body of work addressing this problem, robust visual tracking of generic objects is still challenging, as the performance of a visual tracking algorithm is affected by many factors, such as non-rigid object deformation, partial or full occlusion of the target, illumination variation and scale variation. In particular, many objects in the real world have a complex appearance and articulated structure; the combination of rigid motion and non-rigid object deformation results in complex appearance changes, making general object tracking a particularly hard problem. Recently, part-based trackers have been preferred for tracking under occlusion and non-rigid deformation, because part-based models, which represent the target as a connected set of components, each describing a section of the object, can provide more flexible and robust object appearance models. However, there are four main problems with current part-based trackers: 1) they rely on a response map estimating the likelihood that any given location in an image represents the target (or part); 2) the spatial information they utilise is limited and inflexible; 3) they have no way of jointly learning shape and appearance; and 4) a more complex motion model is required, with each part's motion having separate factors. To address these four problems, this thesis proposes a novel approach to part-based tracking that replaces local matching of an appearance model with direct prediction of the displacement between local image patches and part locations. It proposes to use cascaded regression (SDM) with incremental learning on deeply learned features to track generic objects without any prior knowledge of an object's structure or appearance, exploiting the spatial constraints between individual parts, and between parts and the object as a whole, by implicitly learning the shape and deformation parameters of the object in an online fashion. A multiple-temporal-scale motion model is integrated to initialise the cascaded regression search close to the target and to allow it to cope with occlusions. Experimental results clearly demonstrate the value of the method's components, and comparison with state-of-the-art techniques on the CVPR 2013 Visual Tracker Benchmark shows that the proposed TRIC-track tracker ranks first on the full dataset.
To address the problems of low efficiency and limited samples of SDM in TRIC-track, this thesis introduces Continuous Regression to model-free visual tracking. It is found that the Taylor expansion is unable to accurately approximate image features across a sample space with high variance in visual tracking. This problem is alleviated by the Locally Continuous Regression strategy proposed in this thesis, which unifies sampling-based regression with Continuous Regression in an efficient manner by running Continuous Regression on a few sample locations spread around the target and relating those sampled locations to each other. Locally Continuous Regression is integrated into the main framework of TRIC-track and shows a six-fold improvement in computational cost without sacrificing performance, compared to its sampling-based counterpart.
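The cascaded-regression (SDM-style) inference step underlying this kind of tracker can be sketched as follows: starting from an initial estimate, each learned stage regresses from features extracted around the current part locations to a displacement update. The feature extractor and regressor stages below are random stand-ins, not TRIC-track's deeply learned features or its incrementally updated models.

```python
import numpy as np

# Minimal sketch of one cascaded-regression inference pass for part
# localisation: each stage applies a learned linear update
#   x <- x + R @ phi(I, x) + b
# where phi extracts features around the current part locations.

def extract_features(image, locations):
    # Stand-in feature extractor: flatten a small patch around each (x, y).
    patches = []
    for x, y in locations.astype(int):
        patch = image[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
        patches.append(np.resize(patch, 25))  # pad/crop to a fixed length
    return np.concatenate(patches)

def cascaded_regression(image, init_locations, stages):
    """Refine part locations by applying each learned stage (R, b) in turn."""
    locations = init_locations.copy()
    for R, b in stages:
        phi = extract_features(image, locations)
        locations += (R @ phi + b).reshape(locations.shape)
    return locations

# Toy usage: two parts, three random "learned" stages on a random image.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
parts = np.array([[20.0, 30.0], [40.0, 35.0]])   # initial (x, y) estimates
stages = [(rng.normal(0, 1e-3, (4, 50)), np.zeros(4)) for _ in range(3)]
print(cascaded_regression(image, parts, stages))
```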
