81

Predicting Muscle Activations in a Forward-Inverse Dynamics Framework Using Stability-Inspired Optimization and an In Vivo-Based 6DoF Knee Joint

Potvin, Brigitte January 2016
Modeling and simulation are useful tools for understanding knee function and injuries. Because there are more muscles crossing the human knee joint than equations of motion, optimization protocols are required to solve the muscle redundancy problem. The purpose of this thesis was to improve the biofidelity of these simulations by adding in vivo kinematic constraints derived from experimental intra-cortical pin data and by testing stability-inspired objective functions within an OpenSim-MATLAB forward-inverse dynamics simulation framework, and to assess their effect on lower-limb muscle activation predictions. Results suggest that constraining the model knee joint's ranges of motion with pin data had a significant impact on lower-limb kinematics, especially in the rotational degrees of freedom; this in turn affected muscle activation predictions and knee joint loading when compared with unconstrained kinematics. Furthermore, changing the objective function changes muscle activation predictions, although minimization of muscle activation remains more accurate than the stability-inspired functions, at least for gait.
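Where the abstract mentions optimization protocols for resolving muscle redundancy, a minimal illustration may help. The sketch below is not the thesis's OpenSim-MATLAB pipeline: it solves a single static-optimization time step with hypothetical moment arms, maximum forces, and a required joint moment, using the conventional minimum-activation objective that the abstract compares the stability-inspired functions against.

```python
# Illustrative static-optimization step for the muscle redundancy problem.
# Moment arms, maximum forces, and the required moment are hypothetical values,
# not data from the thesis; the objective (sum of squared activations) is the
# conventional baseline mentioned in the abstract.
import numpy as np
from scipy.optimize import minimize

moment_arms = np.array([0.05, 0.04, -0.03, -0.035])      # m; + flexor, - extensor (hypothetical)
max_forces = np.array([1500.0, 900.0, 2000.0, 1200.0])   # N; hypothetical maximum isometric forces
required_moment = 40.0                                    # N*m; joint moment from inverse dynamics

def objective(a):
    return np.sum(a ** 2)            # minimize summed squared activations

def moment_balance(a):
    # muscle moments must reproduce the inverse-dynamics joint moment
    return np.dot(moment_arms * max_forces, a) - required_moment

result = minimize(objective,
                  x0=np.full(4, 0.1),
                  bounds=[(0.0, 1.0)] * 4,               # activations bounded between 0 and 1
                  constraints={"type": "eq", "fun": moment_balance},
                  method="SLSQP")

print("predicted activations:", np.round(result.x, 3))
```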
82

Opacité des artefacts d'un système Workflow / Opacity of artifacts in Workflow system

Diouf, Mohamadou Lamine 10 October 2014
A property of an object is said to be opaque to an observer if the observer cannot deduce, from its observations of that object, that the property holds. If each observer is attached to a given set of properties (the so-called secrets), then the system itself is said to be opaque if every secret is opaque to the corresponding observer: no observer can uncover any of the secrets attached to it. Opacity has previously been studied in the context of discrete event systems, where sets of assumptions were identified under which opacity of a system is decidable, and where techniques were developed to diagnose and/or enforce opacity. This thesis is the first contribution to the problem of opacity of artifacts in data-centric workflow systems. We formalize this problem, identify the assumptions that must be placed on such systems for opacity to be decidable, and indicate some techniques for enforcing opacity.
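For readers unfamiliar with the opacity notion the abstract builds on, a minimal sketch follows. It implements current-state opacity for a toy finite automaton, i.e. the discrete-event-systems setting the abstract cites, not the artifact-centric workflow formalization developed in the thesis; the automaton, its events, and the secret set are invented for illustration.

```python
# Current-state opacity check for a finite automaton: the secret is opaque iff no
# observable trace pins the observer's state estimate entirely inside the secret set.
# The automaton below is a toy example, not a model from the thesis.
transitions = {            # (state, event) -> next state
    ("q0", "a"): "q1",     # 'a' observable
    ("q0", "u"): "q2",     # 'u' unobservable
    ("q1", "b"): "q3",     # q3 is the secret state
    ("q2", "a"): "q4",
    ("q4", "b"): "q5",
}
observable = {"a", "b"}
secret_states = {"q3"}
initial = "q0"

def unobservable_reach(states):
    """Close a set of states under unobservable transitions."""
    frontier, reach = set(states), set(states)
    while frontier:
        s = frontier.pop()
        for (src, ev), dst in transitions.items():
            if src == s and ev not in observable and dst not in reach:
                reach.add(dst)
                frontier.add(dst)
    return frozenset(reach)

def is_current_state_opaque():
    start = unobservable_reach({initial})
    estimates, frontier = {start}, [start]
    while frontier:
        est = frontier.pop()
        if est <= secret_states:          # observer is certain: secret revealed
            return False
        for ev in observable:
            step = {transitions[(s, ev)] for s in est if (s, ev) in transitions}
            if step:
                nxt = unobservable_reach(step)
                if nxt not in estimates:
                    estimates.add(nxt)
                    frontier.append(nxt)
    return True

print("opaque:", is_current_state_opaque())
```

In this toy example the unobservable branch through q2 gives the observable trace "ab" two possible endpoints, so the observer can never be certain the secret state q3 was reached.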
83

Joint center estimation by single-frame optimization

Frick, Eric 01 December 2018
Joint center location is the driving parameter for determining the kinematics, and subsequently the kinetics, associated with human motion capture; the accuracy with which this location is determined therefore affects all subsequent calculation and analysis. The most significant barrier to accurate determination of this parameter is soft tissue artifact, which contaminates the measurements of on-body devices by allowing them to move relative to the underlying rigid bone. This leads to inaccuracy in both bone pose estimation and joint center location. The complexity of soft tissue artifact (it is nonlinear, multimodal, subject-specific, and trial-specific) makes it difficult to model, and therefore difficult to mitigate. This thesis proposes a novel method, termed Single Frame Optimization, for determining joint center location (through mitigation of soft tissue artifact) via a linearization approach, in which the optimal vector relating a joint center to a corresponding inertial sensor is calculated at each time frame. This results in a time-varying joint center location vector that captures the relative motion due to soft tissue artifact, from which that relative motion can be isolated and removed. The method's, and therefore the optimization's, driving assumption is that the derivative terms in the kinematic equation are negligible relative to the rigid-body terms in the chosen data frame. The validity of this assumption is investigated in a series of numerical simulations and experimental investigations. Each item in this series is presented as a chapter of the thesis but retains the format of a standalone article; this is intended to foster critical analysis of the method at each stage of its development, rather than solely in its practical (and more developed) form.
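A minimal numerical sketch of the quantity the method tracks is given below, with an invented sensor trajectory and joint path (not data or code from the thesis): the sensor-to-joint vector is re-expressed in the sensor frame at every time frame, so for a rigid attachment it is constant, and any frame-to-frame variation mirrors the soft tissue artifact to be isolated.

```python
# Per-frame joint-center offset: c(t) = R(t)^T (j(t) - p(t)).
# p and R describe a hypothetical sensor trajectory; j is the joint-center trajectory.
# Any variation of c over time reflects non-rigid (soft tissue) motion.
import numpy as np

n_frames = 200
true_offset = np.array([0.02, -0.05, 0.10])          # m, sensor-to-joint vector in sensor frame

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

offsets = np.empty((n_frames, 3))
for t in range(n_frames):
    R = rotation_z(0.01 * t)                         # sensor orientation at frame t
    p = np.array([0.3 + 0.001 * t, 0.0, 0.9])        # sensor position at frame t
    wobble = 0.003 * np.sin(0.2 * t) * np.array([1.0, 0.5, 0.0])  # simulated soft tissue artifact
    j = p + R @ (true_offset + wobble)               # joint-center position in the lab frame
    offsets[t] = R.T @ (j - p)                       # per-frame estimate of the offset vector

print("mean offset:", offsets.mean(axis=0).round(4))
print("frame-to-frame std (soft tissue signature):", offsets.std(axis=0).round(4))
```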
84

Kompenzace obrazových artefaktů v HDR obrazu / HDR Image Artifact Compensation

Müllerová, Věra January 2015
This master's thesis deals with HDR (High Dynamic Range) image synthesis. HDRI technology has become very popular in recent years. The common and most widely used way to create an HDR image is to merge multiple shots of the same scene captured with different exposure times. This technique works correctly only for static scenes: if there is any motion in the scene while the exposures are being captured, the resulting HDR image contains artifacts known as ghosts. This thesis presents the fundamentals of HDRI with a focus on methods for removing artifacts from HDR images. It surveys existing methods and identifies two of them, bitmap movement detection and histogram-based ghost detection, as suitable for real-time HDR image composition and for implementation on an FPGA (Field-Programmable Gate Array) architecture. Both methods are implemented as prototypes in C++. In addition, a modification of the histogram-based method is proposed for a simpler and more efficient FPGA implementation.
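A rough sketch of the bitmap movement detection idea summarized above follows, in Python rather than the thesis's C++/FPGA prototypes, and with synthetic data standing in for a real exposure stack; the thresholding and voting rule are simplified assumptions, not the thesis's implementation.

```python
# Bitmap movement detection, roughly following the idea summarized in the abstract:
# exposure-invariant median-threshold bitmaps should agree for a static scene, so
# disagreement across exposures marks moving (ghost-prone) pixels.
# The exposure stack below is synthetic; this is not the thesis's C++/FPGA code.
import numpy as np

def median_threshold_bitmap(image):
    """Binarize a grayscale exposure at its median intensity."""
    return image > np.median(image)

def movement_mask(exposures, min_disagreement=1):
    """Flag pixels whose bitmaps disagree across the exposure stack."""
    bitmaps = np.stack([median_threshold_bitmap(img) for img in exposures])
    votes = bitmaps.sum(axis=0)
    # pixels that are neither dark in all exposures nor bright in all exposures
    return (votes >= min_disagreement) & (votes <= len(exposures) - min_disagreement)

# Three hypothetical exposures of the same scene with different exposure gains.
rng = np.random.default_rng(1)
scene = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
stack = [scene * gain for gain in (0.5, 1.0, 2.0)]   # same static scene, varying exposure
stack[1][20:30, 20:30] += 300.0                      # a moving object in the middle exposure

mask = movement_mask(stack)
print("flagged pixels:", int(mask.sum()), "of", mask.size)
```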
85

Data and image domain deep learning for computational imaging

Ghani, Muhammad Usman 22 January 2021
Deep learning has overwhelmingly impacted post-acquisition image-processing tasks; however, there is increasing interest in more tightly coupled computational imaging approaches, where models, computation, and physical sensing are intertwined. This dissertation focuses on how to leverage the expressive power of deep learning in image reconstruction. We use deep learning in both the sensor data domain and the image domain to develop new fast and efficient algorithms that achieve superior-quality imagery. Metal artifacts are ubiquitous in both security and medical applications. They can greatly limit subsequent object delineation and information extraction from the images, restricting their diagnostic value. This problem is particularly acute in the security domain, where there is great heterogeneity in the objects that can appear in a scene, highly accurate decisions must be made quickly, and the processing time is highly constrained. Motivated primarily by security applications, we present a new deep-learning-based metal artifact reduction (MAR) approach that tackles the problem in the sensor data domain. We treat the observed data corresponding to dense, metal objects as missing data and train an adversarial deep network to complete the missing data directly in the projection domain. The completed projection data are then used with an efficient conventional image reconstruction algorithm to reconstruct an image intended to be free of artifacts. Conventional image reconstruction algorithms assume that high-quality data are present on a dense and regular grid; using conventional methods when these requirements are not met produces images filled with artifacts that are difficult to interpret. In this context, we develop data-domain deep learning methods that enhance the observed data to better meet the assumptions underlying fast conventional analytical reconstruction methods. By focusing learning in the data domain in this way and coupling the result with existing conventional reconstruction methods, high-quality imaging can be achieved in a fast and efficient manner. We demonstrate results on four different problems: (i) low-dose CT, (ii) sparse-view CT, (iii) limited-angle CT, and (iv) accelerated MRI. Image-domain prior models have been shown to improve the quality of reconstructed images, especially when data are limited. A novel principled approach is presented that allows the unified integration of both data-domain and image-domain priors for improved image reconstruction. The consensus equilibrium framework is extended to integrate physical sensor models, data models, and image models. To achieve this integration, the conventional image variables used in consensus equilibrium are augmented with variables representing data-domain quantities. The overall result produces combined estimates of both the data and the reconstructed image that are consistent with the physical models and prior models being utilized. The prior models used in both the image and data domains in this work are created using deep neural networks. The superior quality allowed by incorporating both data- and image-domain prior models is demonstrated for two applications: limited-angle CT and accelerated MRI. A major question that arises in the use of neural networks, and in particular deep networks, is their stability: if the examples seen in the application environment differ from those seen during training, will performance remain robust? We perform an empirical stability analysis of the data- and image-domain deep learning methods developed for limited-angle CT reconstruction, considering three types of perturbations: adversarially optimized, random, and structural. Our empirical analysis reveals that the data-domain learning approach proposed in this dissertation is less susceptible to perturbations than the image-domain post-processing approach. This is a very encouraging result and strongly supports the main argument of this dissertation: there is value in data-domain learning, and it should be part of our computational imaging toolkit.
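As a stand-in for the data-domain completion step described above, the following sketch uses classical linear interpolation across a masked metal trace in a synthetic sinogram. The adversarial completion network the dissertation trains would replace the interpolation; all array shapes and values here are invented for illustration.

```python
# Data-domain metal artifact reduction, reduced to its simplest classical form:
# treat metal-affected sinogram bins as missing and fill them by interpolation
# along each projection row. The dissertation replaces this interpolation with a
# learned adversarial completion network; the sinogram here is synthetic.
import numpy as np

def complete_sinogram(sinogram, metal_trace):
    """Fill masked (metal-affected) detector bins row by row by linear interpolation."""
    completed = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i, row in enumerate(sinogram):
        missing = metal_trace[i]
        if missing.any() and (~missing).any():
            completed[i, missing] = np.interp(bins[missing], bins[~missing], row[~missing])
    return completed

# Synthetic example: a smooth sinogram with a corrupted band standing in for the metal trace.
angles, detectors = 180, 128
sinogram = np.sin(np.linspace(0, np.pi, detectors))[None, :] * np.ones((angles, 1))
metal_trace = np.zeros((angles, detectors), dtype=bool)
metal_trace[:, 60:68] = True                      # bins shadowed by metal at every angle
corrupted = sinogram.copy()
corrupted[metal_trace] = 0.0                      # metal makes these measurements unusable

completed = complete_sinogram(corrupted, metal_trace)
print("max error in completed bins:", float(np.abs(completed - sinogram)[metal_trace].max()))
# The completed sinogram would then be passed to a conventional analytical reconstruction (e.g., FBP).
```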
86

Executable business process modeling as a tool for increasing the understanding of business processes in an organization

Demir, Ersin January 2014
Understanding of business processes is becoming a key factor for successful businesses, yet many organizations today lack knowledge about the business processes they work with. Since interaction between different business processes and different actors is becoming more common, it is not enough for employees to have knowledge only of the business processes in which they are directly involved; they also need to know about the other business processes in the organization. There is little research on this topic in the literature, and the goal of this thesis is to propose a method for addressing the problem by increasing employees' understanding of business processes in an organization. The proposed method consists of designing and executing process models based on a real scenario. A case study was conducted at an IT company whose employees had no or limited knowledge of the business processes their organization works with. Although the method has only been tested in one organization, it is generic and can be applied to other, similar organizations. The design science approach was used to develop the method and build the process models as artifacts. The process models were verified using an executable business modeling tool, iPB, and presented to the employees in a seminar in order to help them understand the business processes better. The employees' knowledge of the business processes was assessed before and after the presentation, allowing us to compare the results and determine how much their knowledge increased. The results show that the employees' knowledge increased significantly. In conclusion, the method of designing and presenting executable business process models proved to be a solution to the problem of insufficient understanding of business processes in an organization.
87

1:1 Digital devices and preparatory school teachers’ classroom practices

Dumas Kuchling, Janine January 2020
In this study, the influence of a 1:1 digital device on South African preparatory school teachers' perceptions of their classroom practices is described. The focus is on the Chromebook as an 'artifact' of learning. Digital technology is becoming prevalent in all spheres of education and, subsequently, interest in this topic is growing. To create an environment where optimal learning takes place, teachers and pupils should adapt their learning and teaching methods to embrace the effects of technology. Teachers are at the forefront of education, and education trends involving digital devices are becoming a reality across all grades. Qualitative research was conducted to gain insight into eight teachers' perceptions of using a 1:1 digital device (the Chromebook) for teaching and learning in a private Gauteng school. The major findings were that teachers had to adapt their preparation, facilitation, and assessment strategies to accommodate the use of the Chromebook in the classroom, which the participants mostly did successfully. The teachers realised that the Chromebook is a useful learning and teaching artifact, or learning and teaching support material, as a tool in the classroom. It enhances multimodal learning, encourages the inclusion of multiliteracies, and creates a third space of learning, where teachers and pupils cooperate in constructing new knowledge. A concern raised by the teachers was that digital learning could have a negative impact on writing skills. They also stated that there should be a balance between technology and traditional teaching methods. The most important recommendations are that teachers should change their attitude and their preparation and implementation of lessons when using the digital device in the classroom. Teachers should realise that pupils whose parents have the financial means and who have access to trending technology (today's digital natives) have instant access to information, and this has changed the way learning takes place. Although new to some teachers, the use of digital devices is second nature to many pupils of the 21st century. Teachers should embrace opportunities for professional development so that the digital device can be effectively incorporated into the learning process in the classroom. / Dissertation (MEd), University of Pretoria, 2020.
88

Aesthetic Response to the Fires at Notre Dame: A Case for Rhetorical Aesthetics Within Conventional Rhetorical Analysis

Clifford, Amanda 29 March 2022
The field of rhetorical aesthetics has a long and rich history. Despite that history, however, aesthetic artifacts have yet to be given the same weight as conventional rhetorical artifacts. My project is to consider the rhetorical effectiveness of aesthetic artifacts, making a case for greater inclusion of these types of artifacts in rhetorical theory. I will demonstrate the effectiveness of the aesthetic by performing a comparative analysis of an aesthetic and a conventional reaction to the 2019 fires at Notre Dame de Paris. By considering the constitutive power of the aesthetic, I will argue that the depth of analysis the aesthetic allows makes it, in some cases, a more effective space for rhetorical analysis than conventional artifacts.
89

Understanding Test-Artifact Quality in Software Engineering

Tran, Huynh Khanh Vi January 2022
Context: The core of software testing is test artifacts, i.e., test cases, test suites, test scripts, test code, test specifications, and natural-language tests. Hence, the quality of test artifacts can negatively or positively impact the reliability of the software testing process. Several empirical studies and secondary studies have investigated test artifact quality. Nevertheless, little is known about how practitioners themselves perceive test artifact quality, and the evidence on test artifact quality in the literature has not been synthesized in one place. Objective: This thesis aims to identify and synthesize the knowledge on test artifact quality from both academia and industry. Hence, our objectives are: (1) to understand practitioners' perspectives on test artifact quality; (2) to investigate how test artifact quality has been characterized in the literature; (3) to increase the reliability of the research method for conducting systematic literature reviews (SLRs) in software engineering. Method: We conducted an interview-based exploratory study and a tertiary study to achieve the first two objectives. We used the tertiary study as a case and referred to related observations from other researchers to achieve the last objective. Results: We provide two quality models based on the findings of the interview-based and tertiary studies. The two models were synthesized and combined to provide a broader view of test artifact quality. Context information that can be used to characterize the environment in which test artifact quality is investigated was also aggregated from these studies' findings. Based on our experience constructing and validating automated search results using a quality gold standard (QGS) in the tertiary study, we provide recommendations for QGS construction and propose an extension to the current search validation approach. Conclusions: The context information and the combined quality model provide a comprehensive view of test artifact quality. Researchers can use the quality model to develop guidelines, templates for designing new test artifacts, or assessment tools for evaluating existing test artifacts. The model can also serve as a guideline for practitioners searching for test artifact quality information, i.e., definitions of quality attributes and measurements. For future work, we aim to investigate how to improve relevant test artifact quality attributes that are challenging to deal with.
90

Applying Dynamic Data Collection to Improve Dry Electrode System Performance for a P300-Based Brain-Computer Interface

Clements, J. M., Sellers, E. W., Ryan, D. B., Caves, K., Collins, L. M., Throckmorton, C. S. 07 November 2016
Objective. Dry electrodes have an advantage over gel-based 'wet' electrodes by providing quicker set-up time for electroencephalography recording; however, the potentially poorer contact can result in noisier recordings. We examine the impact that this may have on brain-computer interface communication and potential approaches for mitigation. Approach. We present a performance comparison of wet and dry electrodes for use with the P300 speller system in both healthy participants and participants with communication disabilities (ALS and PLS), and investigate the potential for a data-driven dynamic data collection algorithm to compensate for the lower signal-to-noise ratio (SNR) in dry systems. Main results. Performance results from sixteen healthy participants obtained in the standard static data collection environment demonstrate a substantial loss in accuracy with the dry system. Using a dynamic stopping algorithm, performance may have been improved by collecting more data in the dry system for ten healthy participants and eight participants with communication disabilities; however, the algorithm did not fully compensate for the lower SNR of the dry system. An analysis of the wet and dry system recordings revealed that delta and theta frequency band power (0.1-4 Hz and 4-8 Hz, respectively) are consistently higher in dry system recordings across participants, indicating that transient and drift artifacts may be an issue for dry systems. Significance. Using dry electrodes is desirable for reduced set-up time; however, this study demonstrates that online performance is significantly poorer than for wet electrodes for users with and without disabilities. We test a new application of dynamic stopping algorithms to compensate for poorer SNR. Dynamic stopping improved dry system performance; however, further signal processing efforts are likely necessary for full mitigation.
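The delta/theta band-power comparison mentioned in the results can be sketched as follows with a synthetic signal; the sampling rate, window length, and Welch estimator are assumptions for illustration, not the study's actual processing pipeline.

```python
# Delta (0.1-4 Hz) and theta (4-8 Hz) band power of an EEG channel via Welch's method.
# The signal below is synthetic; the sampling rate and window length are assumed,
# not taken from the study.
import numpy as np
from scipy.signal import welch

fs = 256                                   # Hz, assumed sampling rate
rng = np.random.default_rng(2)
t = np.arange(0, 30.0, 1.0 / fs)
# synthetic channel: slow drift-like component (delta range) + 10 Hz oscillation + noise
eeg = (5.0 * np.sin(2 * np.pi * 0.5 * t)
       + 1.0 * np.sin(2 * np.pi * 10.0 * t)
       + rng.normal(0.0, 1.0, t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(freqs, psd, lo, hi):
    sel = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[sel], freqs[sel])  # integrate the PSD over the band

print("delta power (0.1-4 Hz):", round(band_power(freqs, psd, 0.1, 4.0), 2))
print("theta power (4-8 Hz):  ", round(band_power(freqs, psd, 4.0, 8.0), 2))
```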
