  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
841

A comparison of the rate and accuracy of symbol location on visual displays using colour-coded alphabetic and categorisation strategies in Grade 1 to 3 children

Herold, M.P. (Marina Patricia) 14 October 2012 (has links)
The ability to locate symbols on a visual display forms an integral part of the effective use of AAC systems. Characteristics of display design and perceptual features of symbols have been shown to influence the rate and accuracy of symbol location (Thistle & Wilkinson, 2009; Wilkinson, Carlin, & Jagaroo, 2006). The current study compared two colour-coded organisational strategies (alphabetical order and categorisation) for their effectiveness in symbol location, and investigated whether bottom-up features influenced the participants' performance in these tasks. A total of 114 learners in Grades 1 to 3 in a mainstream school were randomly divided into two groups. Both groups were exposed to two visual search tests in alternating order. The tests involved searching for 36 visual targets among 81 coloured Picture Communication Symbols on a computer screen, organised by one of the two colour-coded strategies, alphabetical order or categorisation. Data were collected through computer logging of all mouse selections. Findings showed that locating symbols on a computer screen with a categorisation strategy was significantly faster and more accurate than with an alphabetical strategy for the Grade 1 to 3 participants. The rate and accuracy of target symbol location under both strategies decreased significantly as grade increased, as did the differences in rate and accuracy between the two strategies. Although the tests placed heavy top-down processing demands on the participants, there was still evidence of bottom-up factors influencing their performance. Implications for display design in AAC clinical practice are discussed. / Thesis (PhD)--University of Pretoria, 2012. / Centre for Augmentative and Alternative Communication / unrestricted
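The study's two dependent measures, accuracy and rate of symbol location, can be derived directly from a selection log like the one described above. The sketch below is illustrative only: the log format, field order, and sample trials are assumptions, not the study's actual logging software.

```python
# Sketch: scoring a visual-search task from logged mouse selections.
# Each entry is (target_symbol, selected_symbol, seconds_to_select);
# the entries below are invented for illustration.
def score_search_task(log):
    correct = [t for t in log if t[0] == t[1]]
    accuracy = len(correct) / len(log)        # proportion of correct selections
    mean_time = sum(t[2] for t in log) / len(log)  # mean seconds per trial (rate)
    return accuracy, mean_time

log = [("apple", "apple", 2.1), ("run", "run", 3.4),
       ("dog", "cat", 5.0), ("sun", "sun", 1.8)]
acc, mean_time = score_search_task(log)
# acc = 3/4 = 0.75; mean_time = 12.3/4 = 3.075 seconds
```

Comparing these two numbers between the alphabetical and categorisation conditions is the core of the analysis the abstract describes.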
842

Development and Validation of an Administrative Data Algorithm to Identify Adults who have Endoscopic Sinus Surgery for Chronic Rhinosinusitis

Macdonald, Kristian I January 2016 (has links)
Objective: (1) To systematically review the accuracy of chronic rhinosinusitis (CRS) identification in administrative databases; (2) to develop an administrative data algorithm that identifies CRS patients who have endoscopic sinus surgery (ESS). Methods: A chart review was performed for all ESS surgical encounters at The Ottawa Hospital from 2011-12. Cases were defined as encounters in which ESS was performed for Otolaryngologist-diagnosed CRS. An algorithm to identify patients who underwent ESS for CRS was developed using diagnostic and procedural codes within health administrative data, and was internally validated. Results: The systematic review identified only three studies meeting the inclusion criteria, and these showed inaccurate CRS identification. Using the administrative and chart review data, the final algorithm showed that encounters having at least one CRS diagnostic code and one ESS procedural code identified ESS-CRS encounters with excellent accuracy: sensitivity 96.0%, specificity 100%, and positive predictive value 95.4%. Internal validation showed similar accuracy. Conclusion: Most published administrative data studies examining CRS do not consider the accuracy of case identification. We identified a simple algorithm based on administrative database codes that accurately identifies ESS-CRS encounters.
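The three validation metrics reported above all come from a confusion matrix of algorithm output against the chart-review gold standard. A minimal sketch, with invented cell counts (the thesis's actual counts are not given in the abstract):

```python
# Sketch: validation metrics for a case-identification algorithm, computed
# from a confusion matrix against chart review. Counts are illustrative.
def validation_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # share of true ESS-CRS encounters found
    specificity = tn / (tn + fp)   # share of non-cases correctly excluded
    ppv = tp / (tp + fp)           # share of flagged encounters that are true cases
    return sensitivity, specificity, ppv

sens, spec, ppv = validation_metrics(tp=96, fp=5, tn=95, fn=4)
# sens = 96/100 = 0.96; spec = 95/100 = 0.95; ppv = 96/101 ≈ 0.950
```

Note that sensitivity and PPV answer different questions: the former is about missed cases, the latter about false alarms, which is why validation studies report both.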
843

Évaluation analytique de la précision des systèmes en virgule fixe pour des applications de communication numérique / Analytical approach for evaluation of the fixed point accuracy

Chakhari, Aymen 07 October 2014 (has links)
Compared with floating-point arithmetic, fixed-point arithmetic is advantageous in terms of cost and power consumption, but converting an algorithm initially specified in floating point to fixed point is a tedious task. A major step in this conversion process is evaluating the accuracy of the fixed-point specification: changing the application's data format eliminates bits, which generates quantization noise that propagates through the system and degrades the accuracy of the computation at the application's output. This loss of accuracy must therefore be controlled and evaluated in order to guarantee the integrity of the algorithm and meet the application's initial specification. Traditionally, accuracy is evaluated by simulating the fixed-point implementation; such simulation-based approaches require large computing capacity and lead to prohibitive evaluation times. The work in this thesis instead focuses on accuracy evaluation through analytical models, which describe the behaviour of the system through analytical expressions for a defined accuracy metric. The thesis first develops analytical models for evaluating the accuracy of non-smooth decision operators and of cascades of decision operators, founded on a characterization of how quantization errors propagate through such cascades. These models are applied to the accuracy evaluation of the SSFE (Selective Spanning with Fast Enumeration) sphere-decoding algorithm used in MIMO (Multiple-Input Multiple-Output) transmission systems. In a second step, the accuracy of iterative structures of decision operators is addressed: the quantization errors introduced by fixed-point arithmetic are characterized in order to propose analytical models based on estimating an upper bound on the decision error probability, which reduces evaluation time. These models are then applied to the fixed-point specification of the Decision Feedback Equalizer (DFE). The second aspect of the work concerns the optimization of fixed-point word lengths: the decision error probability is minimized for an FPGA (Field-Programmable Gate Array) implementation of the complex DFE under a given accuracy constraint. For each fixed-point specification, accuracy is evaluated with the proposed analytical models, while resource and power consumption on the FPGA are estimated with Xilinx tools, enabling word-length choices that trade off accuracy against cost. The final part of the work treats the fixed-point modelling of iterative decoding algorithms based on turbo decoding and LDPC (Low-Density Parity-Check) decoding. The proposed approach takes the specific structure of these algorithms into account, so the quantities computed within the decoder, as well as the operations, are quantized following an iterative approach. Moreover, the fixed-point representation used, based on the pair of dynamic range and total number of bits, differs from the classical representation that uses the numbers of bits allocated to the integer and fractional parts; with this representation, the choice of dynamic range is more flexible, since it is no longer limited to a power of two. Finally, memory sizes are reduced through saturation and truncation techniques, targeting low-complexity architectures.
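The (dynamic range, total word length) representation described above can be sketched as a uniform quantizer with saturation. This is a minimal illustration of the source of the quantization noise the thesis models analytically; the parameter values are arbitrary, and the thesis's actual quantization laws may differ.

```python
# Sketch: fixed-point quantization parameterized by dynamic range and total
# word length, rather than by integer/fractional bit counts. Illustrative only.
def quantize(x, dynamic, total_bits):
    # 2**total_bits uniform levels spanning [-dynamic, +dynamic).
    step = 2 * dynamic / (2 ** total_bits)
    q = round(x / step) * step
    # Saturate to the representable range.
    return max(-dynamic, min(dynamic - step, q))

x = 0.7371
xq = quantize(x, dynamic=1.0, total_bits=8)   # step = 2/256 = 0.0078125
noise = x - xq                                # |noise| <= step/2 when not saturating
```

Because the dynamic range is a free parameter rather than a power of two, the same word length can be spent more precisely on the actual range of the signal, which is the flexibility the abstract refers to.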
844

Finite Difference and Discontinuous Galerkin Methods for Wave Equations

Wang, Siyang January 2017 (has links)
Wave propagation problems can be modeled by partial differential equations. In this thesis, we study wave propagation in fluids and in solids, modeled by the acoustic wave equation and the elastic wave equation, respectively. In real-world applications, waves often propagate in heterogeneous media with complex geometries, which makes it impossible to derive exact solutions to the governing equations. Instead, we seek approximate solutions by constructing numerical methods and implementing them on modern computers. An efficient numerical method produces accurate approximations at low computational cost. There are many choices of numerical methods for solving partial differential equations; which method is most efficient depends on the particular problem under consideration. In this thesis, we study two numerical methods: the finite difference method and the discontinuous Galerkin method. The finite difference method is conceptually simple and easy to implement, but has difficulties in handling complex geometries of the computational domain. We construct high order finite difference methods for wave propagation in heterogeneous media with complex geometries. In addition, we derive error estimates for a class of finite difference operators applied to the acoustic wave equation. The discontinuous Galerkin method is flexible with respect to complex geometries. Moreover, the discontinuous nature between elements makes the method suitable for multiphysics problems. We use an energy based discontinuous Galerkin method to solve a coupled acoustic-elastic problem.
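The conceptual simplicity of the finite difference method mentioned above can be shown in a few lines. This is a generic textbook leapfrog scheme for the 1D acoustic wave equation u_tt = c²·u_xx, not one of the thesis's high-order operators; grid size, wave speed, and CFL number are illustrative.

```python
import numpy as np

# Sketch: second-order leapfrog finite differences for u_tt = c^2 u_xx
# on [0, 1] with homogeneous Dirichlet boundaries.
c, nx = 1.0, 101
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / c                      # CFL number c*dt/dx = 0.5 <= 1: stable
x = np.linspace(0.0, 1.0, nx)

u_prev = np.sin(np.pi * x)             # initial displacement (a standing mode)
u = u_prev.copy()                      # zero initial velocity, first step simplified
r2 = (c * dt / dx) ** 2
for _ in range(200):
    u_next = np.zeros_like(u)          # boundaries held at zero
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_prev, u = u, u_next
```

The exact solution here is the standing wave cos(πct)·sin(πx), so a stable scheme keeps the amplitude bounded near 1; instability (violating the CFL condition) would blow the solution up instead.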
845

Efficient Simulation of Wave Phenomena

Almquist, Martin January 2017 (has links)
Wave phenomena appear in many fields of science such as acoustics, geophysics, and quantum mechanics. They can often be described by partial differential equations (PDEs). As PDEs typically are too difficult to solve by hand, the only option is to compute approximate solutions by implementing numerical methods on computers. Ideally, the numerical methods should produce accurate solutions at low computational cost. For wave propagation problems, high-order finite difference methods are known to be computationally cheap, but historically it has been difficult to construct stable methods. Thus, they have not been guaranteed to produce reasonable results. In this thesis we consider finite difference methods on summation-by-parts (SBP) form. To impose boundary and interface conditions we use the simultaneous approximation term (SAT) method. The SBP-SAT technique is designed such that the numerical solution mimics the energy estimates satisfied by the true solution. Hence, SBP-SAT schemes are energy-stable by construction and guaranteed to converge to the true solution of well-posed linear PDE. The SBP-SAT framework provides a means to derive high-order methods without jeopardizing stability. Thus, they overcome most of the drawbacks historically associated with finite difference methods. This thesis consists of three parts. The first part is devoted to improving existing SBP-SAT methods. In Papers I and II, we derive schemes with improved accuracy compared to standard schemes. In Paper III, we present an embedded boundary method that makes it easier to cope with complex geometries. The second part of the thesis shows how to apply the SBP-SAT method to wave propagation problems in acoustics (Paper IV) and quantum mechanics (Papers V and VI). The third part of the thesis, consisting of Paper VII, presents an efficient, fully explicit time-integration scheme well suited for locally refined meshes.
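The energy stability described above rests on the defining SBP property: D = H⁻¹Q with Q + Qᵀ = B, so the discrete operator mimics integration by parts. A small sketch checking this for the standard second-order-accurate first-derivative operator (a textbook operator, not one of the thesis's new schemes):

```python
import numpy as np

# Sketch: verify the SBP property D = inv(H) @ Q, Q + Q.T = B, for the
# classical second-order first-derivative SBP operator on a unit-spaced grid.
n = 8
H = np.eye(n); H[0, 0] = H[-1, -1] = 0.5          # diagonal norm (quadrature weights)
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))      # central difference in the interior
Q[0, 0], Q[-1, -1] = -0.5, 0.5                    # one-sided boundary closures
D = np.linalg.inv(H) @ Q                          # the difference operator

B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
# Summation by parts: u^T H (D v) + (D u)^T H v = u^T B v, mimicking
# integration by parts; this is what yields energy estimates for the scheme.
assert np.allclose(Q + Q.T, B)
```

As a sanity check, D differentiates linear functions exactly (D applied to the grid coordinates gives all ones), including at the boundary rows.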
846

Does the Provision of an Intensive and Highly Focused Indirect Corrective Feedback Lead to Accuracy?

Jhowry, Kheerani 05 1900 (has links)
This thesis reports the outcomes of a seven-week quasi-experimental study that explored whether L2 learners who received intensive and highly focused indirect feedback on one type of treatable error - either the third person singular -s, plural endings -s, or the definite article the - eventually become more accurate on the post-test compared with a control group that did not. A paired-samples t-test comparing the pre-test and post-test scores of both groups demonstrates that the experimental group did no better than the control group after receiving indirect corrective feedback. An independent-samples t-test comparing the two groups' accuracy likewise shows no significant difference. The calculated effect sizes, however, indicate that, had the sample sizes been larger, both groups would have eventually become more accurate on the targeted errors, although this would not have been because of the indirect feedback.
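The paired-samples t statistic and the effect size discussed above are both simple functions of the pre/post score differences. A minimal sketch with made-up scores (the study's data are not in the abstract); the effect-size formula here is Cohen's d for paired data:

```python
import math
from statistics import mean, stdev

# Sketch: paired-samples t statistic and Cohen's d from pre/post accuracy
# scores. The scores below are invented for illustration.
def paired_t_and_d(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    sd = stdev(diffs)                                # sample SD of differences
    t = mean(diffs) / (sd / math.sqrt(len(diffs)))   # paired t statistic
    d = mean(diffs) / sd                             # Cohen's d (paired form)
    return t, d

pre  = [10, 12, 9, 14, 11, 13]
post = [11, 12, 10, 15, 11, 14]
t, d = paired_t_and_d(pre, post)
```

A large d with a non-significant t, as the abstract describes, is exactly the small-sample pattern: the effect size suggests a real improvement that the test lacks the power to confirm.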
847

Georeferering av ortofoto med UAV : En jämförelsestudie mellan direkt och indirekt georeferering / Georeferencing of orthophotos with UAV: a comparative study of direct and indirect georeferencing

Abdi, Joan; Johansson, Joel January 2020 (has links)
UAVs (Unmanned Aerial Vehicles) have revolutionized orthophoto production, contributing increased safety, lower costs and a more efficient workflow. Traditional aerial photogrammetry, with aircraft and the placement and surveying of ground targets, has been the standard method for many years. Flying a UAV instead of an aircraft saves time and money, but placing and surveying ground targets remains time-consuming and therefore costly. DJI has developed a new UAV, the DJI Phantom 4 RTK, which supports satellite-based positioning for direct georeferencing. This study compared two georeferencing methods for producing orthophotos with a UAV: direct georeferencing with NRTK (Network Real Time Kinematic satellite positioning) and indirect georeferencing using different numbers of ground control points (GCPs). The study was conducted at the University of Gävle over an area of eight hectares. An examination of deviations in plane and height yielded values acceptable under the guidelines in HMK – Ortofoto (2017) and the checks carried out according to SIS-TS 21144:2016. The planimetric RMS value for the indirect georeferencing method is 0.0102 m; for the direct method with ground control points it lies between 0.0132 and 0.0148 m, and for the direct method without ground control points it is 0.0136 m. The RMS value in height lies in the interval 0.008-0.025 m. Based on these planimetric and height RMS values, an acceptable orthophoto quality can be obtained with all of the georeferencing methods tested. After the checks and evaluation of the results, it can be concluded that the georeferencing methods do not differ much from each other in quality, but that the direct georeferencing method without ground control points is considerably more efficient from a time perspective. The Phantom 4 RTK is new on the market, and more research is needed to gain greater insight into its potential; in particular, more research on direct georeferencing is required to evaluate orthophoto quality.
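The planimetric and height RMS values compared above are computed from coordinate deviations at checked points. A minimal sketch of that computation; the deviation triples below are invented, not the study's measurements:

```python
import math

# Sketch: RMS deviation in plane (East/North) and in height for check points.
# deviations: list of (dE, dN, dH) between the orthophoto/model and reference
# coordinates, in metres. Values here are illustrative.
def rms_plane_height(deviations):
    n = len(deviations)
    rms_plane = math.sqrt(sum(dE**2 + dN**2 for dE, dN, _ in deviations) / n)
    rms_height = math.sqrt(sum(dH**2 for _, _, dH in deviations) / n)
    return rms_plane, rms_height

devs = [(0.010, -0.005, 0.008), (-0.007, 0.009, -0.012), (0.004, 0.003, 0.010)]
rp, rh = rms_plane_height(devs)   # both on the order of 0.01 m, like the study
```

Since the planimetric RMS combines the East and North components, it is naturally somewhat larger than either component alone for the same point quality.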
848

Efeito da fotopolimerização complementar em resinas para impressoras por estereolitografia em suas propriedades mecânicas e diferentes designs de impressão na precisão de modelos odontológicos / Effect of complementary post-curing on the mechanical properties of resins for stereolithography printers, and of different printing designs on the accuracy of dental models

Almeida, Layene Figueiredo. January 2020 (has links)
Advisor: Lidia Parsekian Martins / Abstract: OBJECTIVE: To identify and quantify differences in the mechanical properties of four stereolithography printing resins as a function of post-print exposure time to ultraviolet light, and to quantify, by means of linear measurements, dimensional differences in dental models produced by stereolithography with respect to the insertion of a transverse stabilizer bar in the posterior region, the printing orientation (vertical or horizontal), the model type (hollow or solid), and post-curing with ultraviolet light or not. METHODS: For each mechanical test performed, 140 specimens were printed on SLA (laser stereolithography) and DLP (digital light processing stereolithography) 3D printers and divided into 28 groups, according to resin (4 types: Blue, Gray, Surgical Guide, Standard) and ultraviolet post-cure time (7 times: 0, 5, 10, 15, 30, 60 and 120 minutes). Vickers microhardness, diametral tensile and three-point bending tests were performed, and the data obtained were submitted to a two-way ANOVA with Tukey's post-hoc test. To evaluate the dimensional changes of dental models, two separate experiments were carried out. In the first, 56 horseshoe-shaped dental models of a maxillary arch were printed on the MoonRay S100 printer (SprintRay) with Gray resin (SprintRay), varying the printing orientation, the presence of a bar and the post-curing process. After printing, the models were scanned on an R700 table scanner (3Shape) and then measurements between canines, first and second premolars and molars and ar... (Complete abstract: click electronic access below) / Doctorate
849

Capacity Management in Hyper-Scale Datacenters using Predictive Modelling

Ruci, Xhesika January 2019 (has links)
Big data applications have become increasingly popular with the emergence of cloud computing and the explosion of artificial intelligence. The increasing adoption of data-hungry machines and services is driving the need for more power to keep the world's datacenters running. It has become crucial for large IT companies such as Google, Facebook and Amazon to monitor the energy efficiency of their datacenter facilities and to take action to optimize these heavy consumers of electricity. This master's thesis proposes several predictive models to forecast PUE (Power Usage Effectiveness), regarded as the de facto industry metric for measuring a datacenter's power efficiency. This approach is a novel capacity management technique for predicting and monitoring the environment in order to prevent future disastrous events, which are strictly unacceptable in the datacenter business.
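PUE itself is a simple ratio: total facility power divided by the power delivered to IT equipment, with 1.0 as the unreachable ideal. The sketch below pairs that definition with a deliberately naive moving-average forecast as a stand-in for the thesis's predictive models; the readings are invented.

```python
# Sketch: PUE and a naive one-step forecast from recent readings.
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# Readings are invented; the thesis uses richer predictive models than this.
def pue(total_kw, it_kw):
    return total_kw / it_kw

def forecast_next(history, window=3):
    # Trivial moving-average predictor, a placeholder for real modelling.
    recent = history[-window:]
    return sum(recent) / len(recent)

history = [pue(1200, 1000), pue(1260, 1000), pue(1230, 1000)]  # 1.20, 1.26, 1.23
next_pue = forecast_next(history)
```

Any real capacity-management model would add cooling, weather, and load features, but the target quantity it forecasts is exactly this ratio.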
850

Industrial Hygiene Exposure Estimation Accuracy: An Investigation of Micro-Environmental Factors Impacting Exposure

Eturki, Mohamed 01 October 2019 (has links)
No description available.
