31

Oorsake van leermislukking in die Junior Primêre fase van skole in die Windhoek stadsgebied / Causes of learning failure in the Junior Primary phase of schools in the Windhoek municipal area

Cloete, Hendrika 01 January 2002 (has links)
Thesis in Afrikaans with summaries in Afrikaans and English / The purpose of this research is to establish the extrinsic and intrinsic causes of learning failure, with specific reference to schools in the Windhoek municipal area, and to make recommendations to counteract it. Learning failure and grade failing are disconcertingly high in schools in the Windhoek municipal area. After the Ministry of Education implemented the semi-automatic promotion system, grade failing decreased but learning failure increased, because learners are promoted without having achieved success in the prior grade. According to the literature study, the causes of learning failure are to be found in the home, the school, the environment, and the learner. The empirical investigation showed similar causes of learning failure to those found in the literature study. To counteract these causes, teachers should be better trained, parents should be made more aware of their role in their children's learning success, and more school buildings and teachers are needed to limit class sizes.
32

Segmentace medicínských obrazových dat / Medical Image Segmentation

Lipták, Juraj January 2013 (has links)
This thesis deals with a graph cut approach for segmenting anatomical structures in volumetric medical images. The method requires some voxels to be identified a priori as object or background seeds. The goal of this thesis is the implementation of the graph cut method and the construction of an interactive segmentation tool. The selected method's behaviour is examined on two datasets with manually guided segmentation results. Testing focuses in one case on the influence of the method's parameters on the segmentation results, and in the other on the method's tolerance of various signal-to-noise and contrast-to-noise ratios in the input. The F-measure is used to assess the consistency of a given segmentation with the ground truth.
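As a concrete illustration of the evaluation step this abstract describes, below is a minimal sketch (not the thesis's code; array names are illustrative) of scoring a binary segmentation against a ground-truth mask with the F-measure:

```python
import numpy as np

def f_measure(segmentation: np.ndarray, ground_truth: np.ndarray, beta: float = 1.0) -> float:
    """F-measure between a binary segmentation and a binary ground truth.

    Precision = TP / (TP + FP), Recall = TP / (TP + FN); the F-measure is
    their beta-weighted harmonic mean (beta = 1 gives the usual F1 score).
    """
    seg = segmentation.astype(bool)
    gt = ground_truth.astype(bool)
    tp = np.logical_and(seg, gt).sum()    # voxels labelled object in both
    fp = np.logical_and(seg, ~gt).sum()   # object in result, background in truth
    fn = np.logical_and(~seg, gt).sum()   # missed object voxels
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```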
33

Konstrukce jednoúčelového montážního zařízení pro automobilní průmysl / Design of single purpose assembly device for automotive industry

Denk, Marek January 2020 (has links)
The subject of this master's thesis is the design of a single-purpose assembly machine for the automotive industry. The assembled component is part of a passenger-car headlight and consists of a heatsink, a PCB, and a reflector joined by screws. The result of the thesis is a detailed 3D model of the single-purpose machine, created in Creo Parametric, together with drawing documentation of the designed machine.
34

Poloautomatická segmentace obrazu / Semi-Automatic Image Segmentation

Horák, Jan January 2015 (has links)
This work describes the design and implementation of a tool for creating photomontages, based on methods of semi-automatic image segmentation. The work outlines the problems of segmenting image data and the benefits of interaction with the user. It analyzes different approaches to interactive image segmentation, explains their principles, and shows their strengths and weaknesses. It also presents the advantages and disadvantages of currently used photo-editing applications. It proposes an application for creating photomontages in two steps: extraction of an object from one picture and insertion of it into another. The first step uses the semi-automatic GrabCut segmentation method, which is based on graph theory. The work also compares the proposed application with other applications in which a photomontage can be created, and reports tests of the application carried out by users.
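The extract-then-insert pipeline described above can be sketched with OpenCV's grabCut, which implements the GrabCut method the abstract names. This is an illustrative sketch under assumptions, not the thesis's application: function names, the rectangle-based initialization, and the 5-iteration count are all choices made here.

```python
import cv2
import numpy as np

def extract_object(image_path: str, rect: tuple) -> np.ndarray:
    """Step 1: extract the object inside rect = (x, y, w, h) with GrabCut.

    Returns the image with background pixels zeroed out.
    """
    img = cv2.imread(image_path)
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state required by grabCut
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    # Keep pixels labelled foreground or probable foreground.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype('uint8')
    return img * fg[:, :, np.newaxis]

def paste(fg_img: np.ndarray, background: np.ndarray, offset: tuple) -> np.ndarray:
    """Step 2: insert the extracted object into another picture.

    Assumes the object fits inside the background at the given (y, x) offset.
    """
    out = background.copy()
    y, x = offset
    h, w = fg_img.shape[:2]
    region = out[y:y + h, x:x + w]
    nonzero = fg_img.sum(axis=2) > 0  # pixels that belong to the object
    region[nonzero] = fg_img[nonzero]
    return out
```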
35

Semi-Automatic Analysis and Visualization of Cardiac 4D Flow CT

van Oosten, Anthony January 2022 (has links)
The data obtained from computational fluid dynamics (CFD) simulations of blood flow in the heart is plentiful, and processing it is time-consuming and not straightforward. This project aims to develop a tool that semi-automatically processes CFD simulation data, based on 4D flow computed tomography (CT) data, with minimal user input. The tool should calculate flow parameters from the data time-efficiently and automatically create overview images of the flow field while doing so, to aid the user's analysis. The tool is written in Python, and the Python scripts are run in the application ParaView to process the simulation data.

The tool generates 3-chamber views of the heart by calculating three points from the given patient data, representing the aortic valve, the mitral valve, and the apex of the heart. A plane passing through these three points is generated, and the heart is sliced along this plane to visualize 3 chambers of the heart. The camera position is also manipulated to optimize the 3-chamber view. The maximum outflow velocity over the cardiac cycle in the left atrial appendage (LAA) is determined by searching a time range around the LAA's maximum outflow rate in a cardiac cycle and finding the highest velocity value pointing away from the LAA in that range. The flow component analysis is performed in the LAA and the left ventricle (LV) by seeding particles in each at the start of the cardiac cycle and tracking these particles forwards and backwards in time to determine, respectively, where the particles end up and where they come from. Knowing both, the four flow components of the blood can be determined in the LAA and the LV.

The tool successfully creates 3-chamber views of the heart model from three semi-automatically determined points, at a manipulated camera location. It also calculates the maximum outflow velocity of the flow field over a cardiac cycle in the LAA, and performs a flow component analysis of the LAA and the LV by tracking particles forwards and backwards in time through a cardiac cycle. The maximum-velocity calculation is relatively time-efficient and produces results similar to those found manually, though the output depends on the user-defined inputs and processing techniques and varies between users. The flow component analysis is also time-efficient; it produces results for the LV comparable to pre-existing research, and results for the LAA comparable to those for the LV. However, the extraction of the LAA sometimes includes part of the left atrium, which affects the accuracy of the results. After processing each part, the tool creates a single file containing each part's main results, for easier analysis of the patient data. In conclusion, the tool can semi-automatically process CFD simulation data, saving the user time, and has thus met all the project aims.
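The 3-chamber slicing step lends itself to a short sketch: a plane through the three landmark points is fixed by the cross product of two edge vectors. The snippet below is illustrative only (NumPy; feeding the result to a ParaView Slice filter is the assumed workflow, not code from the thesis):

```python
import numpy as np

def three_chamber_plane(aortic_valve, mitral_valve, apex):
    """Plane through the three landmark points used for the 3-chamber view.

    Returns (origin, unit normal); in ParaView these could be assigned to a
    Slice filter's SliceType.Origin and SliceType.Normal (assumed workflow).
    """
    a = np.asarray(aortic_valve, dtype=float)
    m = np.asarray(mitral_valve, dtype=float)
    x = np.asarray(apex, dtype=float)
    normal = np.cross(m - a, x - a)    # perpendicular to the plane of the 3 points
    normal /= np.linalg.norm(normal)   # assumes the points are not collinear
    origin = (a + m + x) / 3.0         # centroid keeps the slice centred on the landmarks
    return origin, normal
```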
36

A New Segmentation Algorithm for Prostate Boundary Detection in 2D Ultrasound Images

Chiu, Bernard January 2003 (has links)
Prostate segmentation is a required step in determining the volume of the prostate, which is very important in the diagnosis and treatment of prostate cancer. In the past, radiologists manually segmented the two-dimensional cross-sectional ultrasound images; typically, they had to outline at least a hundred cross-sectional images to obtain an accurate estimate of the prostate's volume, a very time-consuming approach. To accomplish this task more efficiently, an automated procedure has to be developed. However, because of the quality of ultrasound images, it is very difficult to develop a computerized method for defining the boundary of an object in an ultrasound image. The goal of this thesis is an automated segmentation algorithm for detecting the boundary of the prostate in ultrasound images. As the first step in this endeavour, a semi-automatic segmentation method is designed. This method is only semi-automatic because it requires the user to enter four initialization points, the data required to define the initial contour. The discrete dynamic contour (DDC) algorithm is then used to update the contour automatically. The DDC model is made up of a set of connected vertices. When provided with an energy field that describes the features of the ultrasound image, the model automatically adjusts the vertices of the contour to attain a maximum energy. In the proposed algorithm, Mallat's dyadic wavelet transform is used to determine the energy field. The dyadic wavelet transform generates approximation coefficients and detail coefficients at different scales; in particular, the two sets of detail coefficients represent the gradient of the smoothed ultrasound image. Since the gradient modulus is high where edge features appear, it is assigned as the energy field that drives the DDC model. The ultimate goal of this work is a fully automatic segmentation algorithm. Since only the initialization stage of the proposed semi-automatic algorithm requires human supervision, developing a fully automatic segmentation algorithm reduces to designing a fully automatic initialization process; such a process is introduced in this thesis. The contours defined by the semi-automatic and the fully automatic segmentation algorithms are compared with the boundary outlined by an expert observer. Tested on 8 sample images, the mean absolute difference between the semi-automatically defined and the manually outlined boundary is less than 2.5 pixels, and that between the fully automatically defined and the manually outlined boundary is less than 4 pixels. Automated segmentation tools achieving this level of accuracy would be very useful in helping radiologists segment the prostate boundary much more efficiently.
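The energy field that drives the DDC can be sketched as follows. The thesis derives the smoothed-image gradient from Mallat's dyadic wavelet detail coefficients; the sketch below approximates that gradient with Gaussian derivative filters, which play the same role. This substitution is an assumption made for brevity, not the thesis's implementation.

```python
import numpy as np
from scipy import ndimage

def gradient_energy_field(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Edge-based energy field for a discrete dynamic contour (DDC).

    The two detail channels of the dyadic wavelet transform act as the x/y
    derivatives of the smoothed image; Gaussian derivative filters are used
    here as a stand-in for those coefficients.
    """
    img = image.astype(float)
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # derivative along x
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # derivative along y
    modulus = np.hypot(gx, gy)         # high where edge features appear
    return modulus / modulus.max()     # normalised energy that attracts the vertices
```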
37

Hierarchical reinforcement learning for spoken dialogue systems

Cuayáhuitl, Heriberto January 2009 (has links)
This thesis focuses on the problem of scalable optimization of dialogue behaviour in speech-based conversational systems using reinforcement learning. Most previous investigations in dialogue strategy learning have proposed flat reinforcement learning methods, which are more suitable for small-scale spoken dialogue systems. This research formulates the problem in terms of Semi-Markov Decision Processes (SMDPs) and proposes two hierarchical reinforcement learning methods that optimize sub-dialogues rather than full dialogues. The first method uses a hierarchy of SMDPs, where every SMDP ignores irrelevant state variables and actions in order to optimize a sub-dialogue. The second method extends the first by constraining every SMDP in the hierarchy with prior expert knowledge, using a learning algorithm called 'HAM+HSMQ-Learning' that combines two existing algorithms from the hierarchical reinforcement learning literature. Whilst the first method generates fully-learnt behaviour, the second generates semi-learnt behaviour. In addition, this research proposes a heuristic dialogue simulation environment for automatic dialogue strategy learning. Experiments were performed in simulated and real environments based on a travel planning spoken dialogue system. The experimental results support the following claims. First, both methods scale well at the cost of near-optimal solutions, resulting in slightly longer dialogues than the optimal solutions. Second, dialogue strategies learnt with coherent user behaviour and conservative recognition error rates can outperform a reasonable hand-coded strategy. Third, semi-learnt dialogue behaviours are a better alternative (because of their higher overall performance) than hand-coded or fully-learnt dialogue behaviours. Last, hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi-)automatic design of adaptive behaviours in larger-scale spoken dialogue systems. This research makes the following contributions to spoken dialogue systems that learn their dialogue behaviour. First, the Semi-Markov Decision Process (SMDP) model is proposed for learning spoken dialogue strategies in a scalable way. Second, the concept of 'partially specified dialogue strategies' is proposed for integrating hand-coded and learnt spoken dialogue behaviours into a single learning framework. Third, an evaluation with real users of hierarchical reinforcement learning dialogue agents was essential to validate their effectiveness in a realistic environment.
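The core SMDP idea, discounting by the duration of a temporally extended action such as a sub-dialogue, can be illustrated with a small tabular Q-learning sketch. This is illustrative only: it is not HAM+HSMQ-Learning, and the class and parameter names are assumptions.

```python
import random
from collections import defaultdict

class SMDPQLearner:
    """Tabular SMDP Q-learning sketch.

    The update discounts by gamma**tau, where tau is the number of steps the
    temporally extended action (e.g. a sub-dialogue) took to complete -- the
    key difference from flat MDP Q-learning.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(float)          # Q-values for (state, action) pairs
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy action selection over the sub-dialogue actions."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, tau):
        """reward: cumulative discounted reward collected during the tau steps."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + (self.gamma ** tau) * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```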
38

Étude de l’évolution dans la terminologie de l’informatique en anglais avant et après 2006 / A study of the evolution of English computer science terminology before and after 2006

Lafrance, Angélique 09 1900 (has links)
In this study, we propose a method for observing lexical change (neology and necrology) in English in the field of computer science over a short diachronic period. Since computer science evolves quickly, we believe a short-period diachronic approach (over ten years) lends itself well to studying the terminology of this field. For this purpose, we built an English corpus of articles from the general-public computer magazines PC Magazine and PC World, covering the years 2001 to 2010. The corpus was divided into two subcorpora: 2001-2005 and 2006-2010. We chose 2006 as the pivot year because Facebook (the most popular social network) has been open to the public since that year, and we believed this would produce rapid changes in computer science terminology.
For each magazine, we selected one issue per year from 2001 to 2010, for a total of about 540,000 words in the 2001-2005 subcorpus and about 390,000 words in the 2006-2010 subcorpus. Each subcorpus was submitted to the term extractor TermoStat to extract nominal, verbal, and adjectival candidate terms. We ran three groups of experiments, according to the reference corpus used. In the first group (Exp1), we compared each subcorpus to TermoStat's default reference corpus for English, an extract of the British National Corpus (BNC). In the second group (Exp2), we compared each subcorpus to the whole computer science corpus we created. In the third group (Exp3), we compared the two subcorpora with each other. After cleaning the resulting candidate-term lists to retain only terms from the field of computer science, and generating data on the variation in frequency and relative specificity of the terms between subcorpora, we validated the novelty and obsolescence of the top terms of each list to determine whether the proposed method works better for one type of lexical change (novelty or obsolescence), one part of speech (nominal, verbal, or adjectival terms), or one group of experiments. The validation results show that the method seems better suited to extracting neologisms than necrologisms. We also obtained better results for nominal and adjectival terms than for verbal terms. Finally, we obtained far more results with Exp1 than with Exp2 and Exp3.
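The frequency comparison between subcorpora can be illustrated with a common corpus-comparison statistic, the Dunning log-likelihood ratio. This is a stand-in sketch, not TermoStat's own specificity measure; the example frequencies are invented for illustration.

```python
import math

def log_likelihood(freq_a: int, size_a: int, freq_b: int, size_b: int) -> float:
    """Dunning log-likelihood ratio of a term between two (sub)corpora.

    A large value with freq_a/size_a > freq_b/size_b flags a candidate
    neologism in corpus A; the reverse pattern flags obsolescence.
    """
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    ll = 0.0
    if freq_a:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2.0 * ll

# A hypothetical term seen 120 times in the 2006-2010 subcorpus (~390k words)
# but only 5 times in 2001-2005 (~540k words) scores high as a neologism:
print(log_likelihood(120, 390_000, 5, 540_000))
```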
39

Étude sur l'équivalence de termes extraits automatiquement d'un corpus parallèle : contribution à l'extraction terminologique bilingue / A study of the equivalence of terms automatically extracted from a parallel corpus: a contribution to bilingual terminology extraction

Le Serrec, Annaïch January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
40

Segmentação dos nódulos pulmonares através de interações baseadas em gestos / Segmentation of pulmonary nodules through gesture-based interactions

SOUSA, Héber de Padua 29 January 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Lung cancer is one of the most common malignant tumors, and it has one of the highest mortality rates among cancers, mainly because the disease is often diagnosed late. Medical images are very helpful in supporting early detection, the most important being computed tomography (CT). With digital image acquisition, the use of computer systems for medical visualization is becoming increasingly common. These systems assist in clinical diagnosis and disease monitoring, and in some cases support surgery. The search for new means of human-computer interaction has given rise to natural interaction, which aims to provide a form of control cognitively closer to the actions performed, usually through gestures. Gesture-based interaction can be useful for controlling medical visualization systems and can help guarantee the sterility required in operating rooms, since no manual contact is needed. Among the computer-assisted activities important for the treatment of lung cancer is the segmentation of nodules, which can be performed automatically, semi-automatically, or interactively; it is useful for speeding up the diagnostic process, taking measurements, and observing the morphological appearance of the nodule.
The objective of this work is to investigate the use of natural interaction as an interface for visualizing medical images and segmenting pulmonary nodules. A set of interactive and semi-automatic segmentation tools controlled by gestures was implemented; the gestures are recognized from images captured by the Kinect camera, which translates the scene into depth maps and can precisely measure the distance of objects. Finally, experiments were conducted to evaluate the proposed techniques in terms of ease of use, intuitiveness, accuracy, and comfort.
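One style of interactive segmentation tool described here, growing a nodule region from a single seed point placed by the user's gesture, can be sketched as follows. This is illustrative only: the tolerance value, function names, and 2D-slice simplification are assumptions, not the tools implemented in the thesis.

```python
import numpy as np
from collections import deque

def region_grow(ct_slice: np.ndarray, seed: tuple, tol: float = 50.0) -> np.ndarray:
    """Grow a region from a seed pixel on a CT slice.

    The seed would come from the gesture interface (e.g. a Kinect-tracked
    pointing gesture); pixels within `tol` intensity units of the seed
    value are absorbed via 4-connected flood fill.
    """
    h, w = ct_slice.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(ct_slice[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(ct_slice[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```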
