31 |
Oorsake van leermislukking in die Junior Primêre fase van skole in die Windhoek stadsgebied
Cloete, Hendrika 01 January 2002 (has links)
Thesis in Afrikaans with summaries in Afrikaans and English / Die doel van hierdie ondersoek is om die ekstrinsieke en intrinsieke oorsake van
leermislukking met spesifieke verwysing na skole in die Windhoek stadsgebied te bepaal
en om aanbevelings te doen hoe om leermislukking teen te werk.
Leermislukking en druiping is onrusbarend hoog in skole in die Windhoek stadsgebied.
Nadat die Ministerie van Onderwys die semi-outomatiese promoveringstelsel ingestel
het, het druiping afgeneem maar leermislukking het toegeneem omdat leerders
gepromoveer word sonder dat hulle sukses in die vorige graad behaal het. Volgens die
literatuurstudie lê die oorsake van leermislukking by die ouerhuis, die skool, die
omgewing en in die leerder self.
Die empiriese ondersoek toon ooreenkomste met die literatuurstudie wat betref die
oorsake van leermislukking. Om die oorsake teen te werk
- moet onderwysers beter opgelei word
- moet die ouers meer bewus gemaak word van hulle rol in die leersukses van hulle
kinders
- is meer skoolgeboue en onderwysers nodig om kleiner klasse te bewerkstellig.

The purpose of this research is to establish the extrinsic and intrinsic causes of learning
failure with specific reference to schools in the Windhoek municipal area and to make
recommendations to counteract learning failure.
Learning failure and grade failing are disconcertingly high in schools in the Windhoek
municipal area. After the Ministry of Education implemented the semi-automatic
promotion system, grade failing decreased, but learning failure increased because
learners are promoted without achieving success in a prior grade. According to the
literature study, the causes of learning failure are to be found in the home, the school,
the environment, and the learner.
Similarities regarding the causes of learning failure were found in the empirical
investigation and the literature study. To counteract these causes
• teachers should be better trained
• parents should become more aware of their role in the learning success of their
children
• more school buildings and teachers are needed to limit class sizes.
|
32 |
Segmentace medicínských obrazových dat / Medical Image Segmentation
Lipták, Juraj January 2013 (has links)
This thesis deals with a graph cut approach to segmentation of anatomical structures in volumetric medical images. The method requires some voxels to be identified a priori as object or background seeds. The goal of this thesis is the implementation of the graph cut method and the construction of an interactive segmentation tool. The selected method's behaviour is examined on two datasets with manually guided reference segmentations. In one case, testing focuses on the influence of the method's parameters on the segmentation results; in the other, it examines the method's tolerance to various signal-to-noise and contrast-to-noise ratios in the input. The F-measure is used to assess the consistency of a given segmentation with the ground truth.
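To make the evaluation criterion concrete, here is a minimal sketch (not code from the thesis) of the F-measure for binary segmentation masks, assuming both the result and the ground truth are 0/1 NumPy arrays of equal shape:

```python
import numpy as np

def f_measure(segmentation, ground_truth, beta=1.0):
    """F-measure between a binary segmentation and a ground-truth mask."""
    seg = segmentation.astype(bool)
    gt = ground_truth.astype(bool)

    tp = np.logical_and(seg, gt).sum()    # true positives
    fp = np.logical_and(seg, ~gt).sum()   # false positives
    fn = np.logical_and(~seg, gt).sum()   # false negatives

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With beta = 1 this is the harmonic mean of precision and recall, which for binary masks coincides with the Dice coefficient, so values range from 0 (no overlap) to 1 (perfect agreement).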
|
33 |
Konstrukce jednoúčelového montážního zařízení pro automobilní průmysl / Design of single purpose assembly device for automotive industry
Denk, Marek January 2020 (has links)
The subject of this master's thesis is the design of a single-purpose assembly machine for the automotive industry. The assembled component is part of a passenger-car headlight and consists of a heatsink, a PCB and a reflector, which are mutually connected by screws. The result of the thesis is a detailed 3D model of the single-purpose machine made in Creo Parametric software and drawing documentation of the designed machine.
|
34 |
Poloautomatická segmentace obrazu / Semi-Automatic Image Segmentation
Horák, Jan January 2015 (has links)
This work describes the design and implementation of a tool for creating photomontages. The tool is based on methods of semi-automatic image segmentation. The work outlines the problems of segmenting image data and the benefits of interaction with the user. It analyses different approaches to interactive image segmentation, explains their principles and shows their positive and negative aspects. It also presents the advantages and disadvantages of currently used photo-editing applications. It proposes an application for creating photomontages that consists of two steps: extraction of an object from one picture and insertion of that object into another picture. The first step uses the semi-automatic segmentation method GrabCut, which is based on graph theory. The work also includes a comparison of the application with other applications in which a photomontage can be created, and tests of the application carried out by users.
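As an illustration of the two-step workflow (extraction, then insertion), the sketch below uses OpenCV's GrabCut implementation; it is not the thesis's own code, and the rectangle argument stands in for the user's interactive input:

```python
import cv2
import numpy as np

def extract_object(image_bgr, rect):
    """Extract a foreground object with OpenCV's GrabCut.

    image_bgr : BGR image as returned by cv2.imread
    rect      : (x, y, w, h) rectangle roughly enclosing the object
    Returns a 0/1 foreground mask with the same height and width.
    """
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)       # 5 iterations, rect init
    # Pixels marked as (probable) foreground become 1, the rest 0.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)

def paste_object(source_bgr, mask, target_bgr, top_left):
    """Paste the masked object from source_bgr into target_bgr at (x, y)."""
    x, y = top_left
    h, w = mask.shape
    region = target_bgr[y:y + h, x:x + w]
    region[mask == 1] = source_bgr[mask == 1]   # copy only foreground pixels
    return target_bgr
```

In an interactive tool, the user's strokes would typically refine `mask` directly and GrabCut would be re-run with `cv2.GC_INIT_WITH_MASK`.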
|
35 |
Improving Semi-Automated Segmentation Using Self-Supervised Learning
Blomlöf, Alexander January 2024 (has links)
DeepPaint is a semi-automated segmentation tool that utilises a U-net architecture to perform binary segmentation. To maximise the model's performance and minimise user time, it is advisable to apply Transfer Learning (TL) and reuse a model trained on a similar segmentation task. However, due to the sensitivity of medical data and the unique properties of certain segmentation tasks, TL is not feasible for some applications. In such circumstances, Self-Supervised Learning (SSL) emerges as the most viable option to minimise the time spent in DeepPaint by a user.

Various pretext tasks, exploring both corruption segmentation and corruption restoration, using superpixels and square patches, were designed and evaluated. With a limited number of iterations in both the pretext and downstream tasks, significant improvements across four different datasets were observed. The results reveal that SSL models, particularly those pre-trained on corruption segmentation tasks where square patches were corrupted, consistently outperformed models without pre-training, with regards to a cumulative Dice Similarity Coefficient (DSC).

To examine whether a model could learn relevant features from a pretext task, Centred Kernel Alignment (CKA) was used to measure the similarity of feature spaces across a model's layers before and after fine-tuning on the downstream task. Surprisingly, no significant positive correlation between downstream DSC and CKA was observed in the encoder, likely due to the limited fine-tuning allowed. Furthermore, it was examined whether pre-training on the entire dataset, as opposed to only the training subset, yielded different downstream results. As expected, significantly higher DSC in the downstream task is more likely if the model had access to all data during the pretext task. The differences in downstream segmentation performance between models that accessed different data subsets during pre-training varied across datasets.
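For readers unfamiliar with this kind of pretext task, the hypothetical sketch below shows one way to corrupt square patches and produce the corruption mask that a network would then be trained to segment, together with the Dice Similarity Coefficient used for evaluation. It is illustrative only and not taken from DeepPaint; it assumes a single-channel image scaled to [0, 1] and larger than the patch size:

```python
import numpy as np

def corrupt_square_patches(image, n_patches=8, patch_size=16, rng=None):
    """Corrupt random square patches of a 2-D float image in [0, 1].

    Returns the corrupted image and the binary corruption mask, which
    serves as the pretext-task label (segment the corrupted regions).
    """
    rng = rng or np.random.default_rng()
    corrupted = image.copy()
    mask = np.zeros(image.shape, dtype=np.uint8)
    h, w = image.shape
    for _ in range(n_patches):
        y = int(rng.integers(0, h - patch_size))
        x = int(rng.integers(0, w - patch_size))
        corrupted[y:y + patch_size, x:x + patch_size] = rng.random((patch_size, patch_size))
        mask[y:y + patch_size, x:x + patch_size] = 1
    return corrupted, mask

def dice(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    return 2 * np.logical_and(pred, target).sum() / (pred.sum() + target.sum() + eps)
```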
|
36 |
Semi-Automatic Analysis and Visualization of Cardiac 4D Flow CT
van Oosten, Anthony January 2022 (has links)
The data obtained from computational fluid dynamics (CFD) simulations of blood flow in the heart is plentiful, but processing it takes time and the procedure is not straightforward. This project aims to develop a tool that can semi-automatically process CFD simulation data, which is based on 4D flow computed tomography (CT) data, with minimal user input. The tool should calculate flow parameters from the data in a time-efficient way and automatically create overview images of the flow field while doing so, to aid the user's analysis process. The tool is coded in Python, and the Python scripts are passed to the application ParaView for processing of the simulation data.

The tool generates 3-chamber views of the heart by calculating three points from the given patient data, which represent the aortic and mitral valves and the apex of the heart. A plane is generated that passes through these three points, and the heart is sliced along this plane to visualize three chambers of the heart. The camera position is also manipulated to optimize the 3-chamber view. The maximum outflow velocity of the left atrial appendage (LAA) over the cardiac cycle is determined by searching in a time range around the LAA's maximum outflow rate and finding the highest velocity value in this range that points away from the LAA. The flow component analysis is performed in the LAA and left ventricle (LV) by seeding particles in each at the start of the cardiac cycle and tracking these particles forwards and backwards in time to determine where they end up and where they come from, respectively. Knowing these two aspects, the four different flow components of the blood can be determined in both the LAA and the LV.

The tool can successfully create 3-chamber views of the heart model from three semi-automatically determined points, at a manipulated camera location. It can also calculate the maximum outflow velocity of the flow field over a cardiac cycle in the LAA, and perform a flow component analysis of the LAA and the LV by tracking particles forwards and backwards in time through a cardiac cycle. The maximum velocity calculation is relatively time-efficient and produces results similar to those found manually, yet the output depends on the user-defined inputs and processing techniques and varies between users. The flow component analysis is also time-efficient, produces results for the LV that are comparable to pre-existing research, and produces results for the LAA that are comparable to the LV results. However, the extraction process of the LAA sometimes includes part of the left atrium, which affects the accuracy of the results. After processing each part, the tool creates a single file containing each part's main results for easier analysis of the patient data. In conclusion, the tool is capable of semi-automatically processing CFD simulation data, which saves the user time, and it has thus met all the project aims.
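The particle-based flow component analysis can be summarised as a simple classification rule over the forward- and backward-tracking results. The sketch below assumes the four-component naming commonly used in 4D flow analysis (direct flow, retained inflow, delayed ejection, residual volume); it is only an illustration of the idea, not the thesis's ParaView pipeline:

```python
def classify_flow_component(inside_at_start, inside_at_end):
    """Assign a particle to one of four flow components.

    inside_at_start : True if backward tracking shows the particle was
                      already inside the chamber at the start of the cycle
    inside_at_end   : True if forward tracking shows the particle is
                      still inside the chamber at the end of the cycle
    """
    if not inside_at_start and not inside_at_end:
        return "direct flow"        # enters and leaves within the cycle
    if not inside_at_start and inside_at_end:
        return "retained inflow"    # enters but does not leave
    if inside_at_start and not inside_at_end:
        return "delayed ejection"   # was inside, leaves during the cycle
    return "residual volume"        # was inside and stays inside

# Example: a particle that entered during the cycle and left again.
print(classify_flow_component(inside_at_start=False, inside_at_end=False))
```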
|
37 |
A New Segmentation Algorithm for Prostate Boundary Detection in 2D Ultrasound Images
Chiu, Bernard January 2003 (has links)
Prostate segmentation is a required step in determining the volume of a prostate, which is very important in the diagnosis and treatment of prostate cancer. In the past, radiologists manually segmented the two-dimensional cross-sectional ultrasound images. Typically, they need to outline at least a hundred cross-sectional images in order to get an accurate estimate of the prostate's volume. This approach is very time-consuming. To accomplish this task more efficiently, an automated procedure has to be developed. However, because of the quality of ultrasound images, it is very difficult to develop a computerized method for defining the boundary of an object in an ultrasound image.
The goal of this thesis is to find an automated segmentation algorithm for detecting the boundary of the prostate in ultrasound images. As the first step in this endeavour, a semi-automatic segmentation method is designed. This method is only semi-automatic because it requires the user to enter four initialization points, which are the data required to define the initial contour. The discrete dynamic contour (DDC) algorithm is then used to automatically update the contour. The DDC model is made up of a set of connected vertices. When provided with an energy field that describes the features of the ultrasound image, the model automatically adjusts the vertices of the contour to attain a maximum energy. In the proposed algorithm, Mallat's dyadic wavelet transform is used to determine the energy field. Using the dyadic wavelet transform, approximation coefficients and detail coefficients at different scales can be generated. In particular, the two sets of detail coefficients represent the gradient of the smoothed ultrasound image. Since the gradient modulus is high at the locations where edge features appear, it is assigned as the energy field used to drive the DDC model.
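As a rough illustration of such an energy field, the sketch below uses a Gaussian/Sobel pipeline as a simplified stand-in for Mallat's dyadic wavelet transform; the essential point is the same, namely that the gradient modulus of the smoothed image peaks on edges and can therefore drive the DDC vertices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gradient_energy_field(image, sigma=2.0):
    """Energy field for a contour model: gradient modulus of the smoothed image.

    This Gaussian/Sobel pipeline is a simple stand-in for the dyadic wavelet
    transform used in the thesis; the returned field is high on edges.
    """
    smoothed = gaussian_filter(image.astype(float), sigma)
    gx = sobel(smoothed, axis=1)   # horizontal detail (x-gradient)
    gy = sobel(smoothed, axis=0)   # vertical detail (y-gradient)
    return np.hypot(gx, gy)        # gradient modulus
```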
The ultimate goal of this work is to develop a fully-automatic segmentation algorithm. Since only the initialization stage requires human supervision in the proposed semi-automatic segmentation algorithm, the task of developing a fully-automatic segmentation algorithm is reduced to designing a fully-automatic initialization process. Such a process is introduced in this thesis.
In this work, the contours defined by the semi-automatic and the fully-automatic segmentation algorithms are compared with the boundary outlined by an expert observer. Tested on 8 sample images, the mean absolute difference between the semi-automatically defined and the manually outlined boundary is less than 2.5 pixels, and that between the fully-automatically defined and the manually outlined boundary is less than 4 pixels. Automated segmentation tools that achieve this level of accuracy would be very useful in helping radiologists segment the prostate boundary much more efficiently.
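One common way to compute such a boundary distance is sketched below (the thesis may define the measure slightly differently, for example symmetrically or against a resampled contour):

```python
import numpy as np
from scipy.spatial.distance import cdist

def mean_absolute_difference(contour_a, contour_b):
    """Mean distance in pixels from each vertex of contour_a to the
    nearest vertex of contour_b; contours are (N, 2) arrays of (x, y)."""
    distances = cdist(contour_a, contour_b)   # pairwise Euclidean distances
    return distances.min(axis=1).mean()
```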
|
38 |
Hierarchical reinforcement learning for spoken dialogue systems
Cuayáhuitl, Heriberto January 2009 (has links)
This thesis focuses on the problem of scalable optimization of dialogue behaviour in speech-based conversational systems using reinforcement learning. Most previous investigations in dialogue strategy learning have proposed flat reinforcement learning methods, which are more suitable for small-scale spoken dialogue systems. This research formulates the problem in terms of Semi-Markov Decision Processes (SMDPs), and proposes two hierarchical reinforcement learning methods to optimize sub-dialogues rather than full dialogues. The first method uses a hierarchy of SMDPs, where every SMDP ignores irrelevant state variables and actions in order to optimize a sub-dialogue. The second method extends the first one by constraining every SMDP in the hierarchy with prior expert knowledge. The latter method proposes a learning algorithm called 'HAM+HSMQ-Learning', which combines two existing algorithms in the literature of hierarchical reinforcement learning. Whilst the first method generates fully-learnt behaviour, the second one generates semi-learnt behaviour. In addition, this research proposes a heuristic dialogue simulation environment for automatic dialogue strategy learning.

Experiments were performed on simulated and real environments based on a travel planning spoken dialogue system. Experimental results provided evidence to support the following claims: First, both methods scale well at the cost of near-optimal solutions, resulting in slightly longer dialogues than the optimal solutions. Second, dialogue strategies learnt with coherent user behaviour and conservative recognition error rates can outperform a reasonable hand-coded strategy. Third, semi-learnt dialogue behaviours are a better alternative (because of their higher overall performance) than hand-coded or fully-learnt dialogue behaviours. Last, hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of adaptive behaviours in larger-scale spoken dialogue systems.

This research makes the following contributions to spoken dialogue systems which learn their dialogue behaviour. First, the Semi-Markov Decision Process (SMDP) model was proposed to learn spoken dialogue strategies in a scalable way. Second, the concept of 'partially specified dialogue strategies' was proposed for integrating simultaneously hand-coded and learnt spoken dialogue behaviours into a single learning framework. Third, an evaluation with real users of hierarchical reinforcement learning dialogue agents was essential to validate their effectiveness in a realistic environment.
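The core of the SMDP formulation is that an action may itself be a whole sub-dialogue lasting several turns, so the temporal-difference update discounts by the number of elapsed steps. The sketch below shows a generic tabular SMDP Q-learning agent in that spirit; it is not the thesis's HAM+HSMQ-Learning implementation:

```python
import random
from collections import defaultdict

class SMDPQLearner:
    """Tabular SMDP Q-learning for one sub-dialogue in a hierarchy.

    The only difference from flat Q-learning is that an action (which may
    itself be a child sub-dialogue) can last tau steps, so the discount is
    gamma ** tau and reward is the return accumulated over those steps.
    """
    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy action selection over the current Q estimates.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, tau):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + (self.gamma ** tau) * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```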
|
39 |
Étude de l’évolution dans la terminologie de l’informatique en anglais avant et après 2006
Lafrance, Angélique 09 1900 (has links)
Dans la présente étude, nous proposons une méthode pour observer les changements lexicaux (néologie et nécrologie) en anglais dans le domaine de l’informatique en diachronie courte. Comme l’informatique évolue rapidement, nous croyons qu’une approche en diachronie courte (sur une période de 10 ans) se prête bien à l’étude de la terminologie de ce domaine.
Pour ce faire, nous avons construit un corpus anglais constitué d’articles de revues d’informatique grand public, PC Magazine et PC World, couvrant les années 2001 à 2010. Le corpus a été divisé en deux sous-corpus : 2001-2005 et 2006-2010. Nous avons choisi l'année 2006 comme pivot, car c’est depuis cette année-là que Facebook (le réseau social le plus populaire) est ouvert au public, et nous croyions que cela donnerait lieu à des changements rapides dans la terminologie de l’informatique. Pour chacune des deux revues, nous avons sélectionné un numéro par année de 2001 à 2010, pour un total d’environ 540 000 mots pour le sous-corpus de 2001 à 2005 et environ 390 000 mots pour le sous-corpus de 2006 à 2010. Chaque sous-corpus a été soumis à l’extracteur de termes TermoStat pour en extraire les candidats-termes nominaux, verbaux et adjectivaux. Nous avons procédé à trois groupes d’expérimentations, selon le corpus de référence utilisé. Dans le premier groupe d’expérimentations (Exp1), nous avons comparé chaque sous-corpus au corpus de référence par défaut de TermoStat pour l’anglais, un extrait du British National Corpus (BNC). Dans le deuxième groupe d’expérimentations (Exp2), nous avons comparé chacun des sous-corpus à l’ensemble du corpus informatique que nous avons créé. Dans le troisième groupe d’expérimentations (Exp3), nous avons comparé chacun des sous-corpus entre eux.
Après avoir nettoyé les listes de candidats-termes ainsi obtenues pour ne retenir que les termes du domaine de l’informatique, et généré des données sur la variation de la fréquence et de la spécificité relative des termes entre les sous-corpus, nous avons procédé à la validation de la nouveauté et de l’obsolescence des premiers termes de chaque liste pour déterminer si la méthode proposée fonctionne mieux avec un type de changement lexical (nouveauté ou obsolescence), une partie du discours (termes nominaux, termes verbaux et termes adjectivaux) ou un groupe d’expérimentations.
Les résultats de la validation montrent que la méthode semble mieux convenir à l’extraction des néologismes qu’à l’extraction des nécrologismes. De plus, nous avons obtenu de meilleurs résultats pour les termes nominaux et adjectivaux que pour les termes verbaux. Enfin, nous avons obtenu beaucoup plus de résultats avec l’Exp1 qu’avec l’Exp2 et l’Exp3.

In this study, we propose a method to observe lexical changes (neology and necrology) in English in the field of computer science in short-period diachrony. Since computer science evolves quickly, we believe that a short-period diachronic approach (over a period of 10 years) lends itself to studying the terminology of that field.
For this purpose, we built a corpus in English with articles taken from computer science magazines for the general public, PC Magazine and PC World, covering the years 2001 to 2010. The corpus was divided into two subcorpora: 2001-2005 and 2006-2010. We chose year 2006 as a pivot, because Facebook (the most popular social network) has been open to the public since that year, and we believed that would cause quick changes in computer science terminology. For each of the magazines, we selected one issue per year from 2001 to 2010, for a total of about 540,000 words for the 2001-2005 subcorpus and about 390,000 words for the 2006-2010 subcorpus. Each subcorpus was submitted to term extractor TermoStat to extract nominal, verbal and adjectival term candidates. We proceeded to three experiment groups, according to the reference corpus used. In the first experiment group (Exp1), we compared each subcorpus to the default reference corpus in TermoStat for English, a British National Corpus (BNC) extract. In the second experiment group (Exp2), we compared each subcorpus to the whole computer science corpus we created. In the third experiment group (Exp3), we compared the two subcorpora with each other.
After cleaning up the term candidate lists thus obtained to retain only the terms in the field of computer science, and generating data about relative frequency and relative specificity of the terms between subcorpora, we proceeded to the validation of novelty and obsolescence of the first terms of each list to determine whether the proposed method works better with a particular type of lexical change (novelty or obsolescence), part of speech (nominal, verbal or adjectival term), or experiment group.
The validation results show that the method seems to work better with neology extraction than with necrology extraction. Also, we had better results with nominal and adjectival terms than with verbal terms. Finally, we had many more results with Exp1 than with Exp2 and Exp3.
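As a simplified stand-in for TermoStat's specificity measure, the sketch below flags neologism and necrologism candidates by comparing a term's relative frequency in the two subcorpora with a smoothed log-ratio; the scoring function and the toy data are illustrative only:

```python
import math
from collections import Counter

def log_ratio(term, counts_a, counts_b, total_a, total_b):
    """Log-ratio of a term's relative frequency in sub-corpus B (2006-2010)
    versus sub-corpus A (2001-2005), with add-one smoothing.

    Large positive values suggest neologism candidates, large negative
    values suggest necrologism (obsolescence) candidates.
    """
    rel_a = (counts_a[term] + 1) / (total_a + 1)
    rel_b = (counts_b[term] + 1) / (total_b + 1)
    return math.log2(rel_b / rel_a)

# Toy token lists standing in for the two subcorpora:
tokens_2001_2005 = ["modem", "floppy", "browser", "browser"]
tokens_2006_2010 = ["browser", "smartphone", "smartphone", "cloud"]
a, b = Counter(tokens_2001_2005), Counter(tokens_2006_2010)
for term in sorted(set(a) | set(b)):
    print(term, round(log_ratio(term, a, b, sum(a.values()), sum(b.values())), 2))
```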
|
40 |
Étude sur l'équivalence de termes extraits automatiquement d'un corpus parallèle : contribution à l'extraction terminologique bilingue
Le Serrec, Annaïch January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
|