1 |
High-resolution imaging using a translating coded aperture. Mahalanobis, Abhijit; Shilling, Richard; Muise, Robert; Neifeld, Mark. 22 August 2017
It is well known that a translating mask can optically encode low-resolution measurements from which higher-resolution images can be computationally reconstructed. We experimentally demonstrate that this principle can be used to achieve a substantial increase in image resolution compared to the size of the focal plane array (FPA). Specifically, we describe a scalable architecture with a translating mask (also referred to as a coded aperture) that achieves an eightfold resolution improvement (or a 64:1 increase in the number of pixels compared to the number of focal plane detector elements). The imaging architecture is described in terms of general design parameters (such as field of view and angular resolution, dimensions of the mask, and the detector and FPA sizes), and some of the underlying design trade-offs are discussed. Experiments conducted with different mask patterns and reconstruction algorithms illustrate how these parameters affect the resolution of the reconstructed image. Initial experimental results also demonstrate that the architecture can directly support task-specific information sensing for detection and tracking, and that moving objects can be reconstructed separately from the stationary background using motion priors. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
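As a rough, hedged illustration of the measurement principle described above (not the authors' actual system), the sketch below simulates an idealized translating-mask imager in which each focal-plane detector element sums an 8 x 8 block of the mask-modulated scene, and a high-resolution image is recovered from 64 mask shifts by least squares. The random binary mask, cyclic shift schedule, and solver choice are all assumptions made for the example.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

N, B = 64, 8                       # high-res scene is N x N; each detector element covers a B x B block
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, (N, N)).astype(float)          # assumed random binary mask (illustrative)
shifts = [(i, j) for i in range(B) for j in range(B)]    # 64 translations -> 64:1 pixel gain

def measure(x, shift):
    """One low-resolution frame: shift the mask, modulate the scene, sum B x B blocks."""
    m = np.roll(mask, shift, axis=(0, 1))
    return (m * x).reshape(N // B, B, N // B, B).sum(axis=(1, 3))

def measure_adjoint(y, shift):
    """Adjoint of measure(): replicate each low-res pixel over its block, then re-modulate."""
    m = np.roll(mask, shift, axis=(0, 1))
    return m * np.kron(y, np.ones((B, B)))

def A_mv(x_flat):
    x = x_flat.reshape(N, N)
    return np.concatenate([measure(x, s).ravel() for s in shifts])

def A_rmv(y_flat):
    frames = y_flat.reshape(len(shifts), N // B, N // B)
    return sum(measure_adjoint(f, s) for f, s in zip(frames, shifts)).ravel()

A = LinearOperator((len(shifts) * (N // B) ** 2, N * N), matvec=A_mv, rmatvec=A_rmv, dtype=float)

scene = rng.random((N, N))                                        # stand-in for the true high-res scene
y = A.matvec(scene.ravel()) + 0.01 * rng.standard_normal(A.shape[0])   # noisy low-res measurements
x_hat = lsqr(A, y, atol=1e-8, btol=1e-8)[0].reshape(N, N)              # least-squares reconstruction
print("relative error:", np.linalg.norm(x_hat - scene) / np.linalg.norm(scene))
```

In this toy setup the 64 shifted-mask frames together make the block-sum system invertible, which is the same counting argument behind the 64:1 pixel gain quoted in the abstract.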
|
2 |
Den uppgiftsspecifika träningens påverkan på ADL-förmågan efter stroke: En litteraturöversikt / The effect of task-specific training on the ADL-capacity after stroke: A literature review. Frisk, Lisa; Risarv, Marléne. January 2021
Aim: The aim of this literature review was to describe how task-specific training affects ADL capacity for people affected by stroke within occupational therapy intervention. Method: The data collection was conducted in two databases in medicine and health with a focus on occupational therapy and rehabilitation. The literature search was carried out in the databases CINAHL with Full Text and PubMed. The inclusion criteria and quality review resulted in twelve quantitative studies. The studies were analyzed through the three steps of study analysis described in Friberg (2017). The analysis resulted in five categories, which structure the results. Results: The results are reported in the categories: The definition and the nature of task-specific training; The context of the intervention and the choice of task or activity; Task-specific training in combination with another measure; The result of the task-specific training's impact on functional ability; and The result of the task-specific training's impact on performance capacity. The results showed that task-specific training was defined, performed, and combined in different ways in the studies. Task-specific training improved the function of the upper extremity and increased mobility and coordination. It also improved the participants' performance capacity, with both occupational performance and self-perceived performance capacity improving. Conclusion: The results showed that task-specific training had a positive impact on ADL capacity, since functional and performance capacity improved for people affected by stroke. The authors believe that more research is needed on how task-specific training should be carried out, as there is currently no consistent description of the method. More studies should also be conducted in which participants exercise in activities.
|
3 |
DESCRIPTION AND ANALYSIS OF A FLEXIBLE HARDWARE ARCHITECTURE FOR EVENT-DRIVEN DISTRIBUTED SENSOR NETWORK NODES. Davis, Jesse; Kyker, Ron; Berry, Nina. October 2003
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
A particular engineering aspect of distributed sensor networks that has not received adequate attention is the system-level hardware architecture of the individual nodes of the network. A novel hardware architecture based on an idea of task-specific modular computing is proposed to provide both the high flexibility and low power consumption required for distributed sensing solutions. The power consumption of the architecture is mathematically analyzed against a traditional approach, and guidelines are developed for application scenarios that would benefit from using this new design.
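The paper's power analysis is not reproduced in the abstract; the sketch below is only a hypothetical duty-cycle model of the kind of comparison described, contrasting an always-ready monolithic node with task-specific modules that draw power only while active. Every module name and power figure is invented for illustration.

```python
# Hypothetical duty-cycle power model: average power of a monolithic always-ready node
# versus a node built from task-specific modules that are powered only while in use.
# All figures below are made-up placeholders, not values from the paper.

SLEEP_MW = 0.05                                   # baseline sleep power common to both designs (mW)

monolithic = {"mcu+radio+sensor": (18.0, 1.00)}   # (active power in mW, duty cycle)
modular = {
    "sensor module": (2.0, 0.10),                 # samples 10% of the time
    "processing module": (8.0, 0.02),             # wakes only to process events
    "radio module": (25.0, 0.005),                # transmits only on detections
}

def avg_power(modules, sleep=SLEEP_MW):
    """Average power = sleep floor + sum over modules of (active power x duty cycle)."""
    return sleep + sum(p * d for p, d in modules.values())

print(f"monolithic: {avg_power(monolithic):.2f} mW")
print(f"modular:    {avg_power(modular):.2f} mW")
```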
|
4 |
A Task-Specific Approach to Computational Imaging System Design. Ashok, Amit. January 2008
The traditional approach to imaging system design places the sole burden of image formation on optical components. In contrast, a computational imaging system relies on a combination of optics and post-processing to produce the final image and/or output measurement. Therefore, the joint optimization (JO) of the optical and the post-processing degrees of freedom plays a critical role in the design of computational imaging systems. The JO framework also allows us to incorporate task-specific performance measures to optimize an imaging system for a specific task. In this dissertation, we consider the design of computational imaging systems within a JO framework for two separate tasks: object reconstruction and iris recognition. The goal of these design studies is to optimize the imaging system to overcome the performance degradation introduced by under-sampled image measurements. Within the JO framework, we engineer the optical point spread function (PSF) of the imager, representing the optical degrees of freedom, in conjunction with the post-processing algorithm parameters to maximize task performance. For the object reconstruction task, the optimized imaging system achieves a 50% improvement in resolution and nearly 20% lower reconstruction root-mean-square error (RMSE) compared to the un-optimized imaging system. For the iris-recognition task, the optimized imaging system achieves a 33% improvement in false rejection ratio (FRR) at a fixed false alarm ratio (FAR) relative to the conventional imaging system. The effect of performance measures such as resolution, RMSE, FRR, and FAR on the optimal design highlights the crucial role of task-specific design metrics in the JO framework. We introduce a fundamental measure of task-specific performance known as task-specific information (TSI): an information-theoretic measure that quantifies the information content of an image measurement relevant to a specific task. A variety of source models are derived to illustrate the application of a TSI-based analysis to conventional and compressive imaging (CI) systems for tasks such as target detection and classification. A TSI-based design and optimization framework is also developed and applied to the design of CI systems for the task of target detection; it yields a six-fold performance improvement over the conventional imaging system at low signal-to-noise ratios.
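TSI is defined in the dissertation through specific source and measurement models that the abstract does not reproduce; as a rough illustration of the underlying quantity (mutual information between a task variable and a noisy measurement), the sketch below computes I(T; y) for a binary target present/absent task observed through a single noisy scalar measurement. The signal means and noise level are assumptions for the example.

```python
import numpy as np

# Binary task variable T (target absent/present), equiprobable, observed through a scalar
# measurement y = s_T + n with Gaussian noise. Mutual information I(T; y) in bits quantifies
# how much task-relevant information the measurement carries (at most 1 bit for a binary task).
s = {0: 0.0, 1: 1.0}          # assumed mean measurement under each hypothesis
sigma = 0.5                   # assumed measurement noise standard deviation

y = np.linspace(-4, 5, 4001)
dy = y[1] - y[0]
gauss = lambda mu: np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

p_y_given_t = {t: gauss(mu) for t, mu in s.items()}
p_y = 0.5 * (p_y_given_t[0] + p_y_given_t[1])

# I(T; y) = sum_t p(t) * integral p(y|t) * log2( p(y|t) / p(y) ) dy
tsi_bits = sum(0.5 * np.sum(p * np.log2(p / p_y) * dy) for p in p_y_given_t.values())
print(f"task-specific information ~ {tsi_bits:.3f} bits")
```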
|
5 |
Predicting Task-specific Performance for Iterative Reconstruction in Computed Tomography. Chen, Baiyu. January 2014
The cross-sectional images of computed tomography (CT) are calculated from a series of projections using reconstruction methods. Recently introduced on clinical CT scanners, the iterative reconstruction (IR) method enables potential patient dose reduction with significantly reduced image noise, but is limited by its "waxy" texture and nonlinear nature. To balance the advantages and disadvantages of IR, evaluations are needed with diagnostic accuracy as the endpoint. Moreover, evaluations need to take into consideration the type of imaging task (detection or quantification), the properties of the task (lesion size, contrast, edge profile, etc.), and other acquisition and reconstruction parameters.

To evaluate detection tasks, the more accepted method is observer studies, which involve image preparation, graphical user interface setup, manual detection and scoring, and statistical analyses. Because such evaluation can be time consuming, mathematical models have been proposed to efficiently predict observer performance in terms of a detectability index (d'). However, certain assumptions such as system linearity may need to be made, thus limiting the application of the models to potentially nonlinear IR. For evaluating quantification tasks, the conventional method can also be time consuming, as it usually involves experiments with anthropomorphic phantoms. A mathematical model similar to d' was therefore proposed for the prediction of volume quantification performance, named the estimability index (e'). However, this prior model was limited in its modeling of the task, its modeling of the volume segmentation process, and its assumption of system linearity.

To expand the prior d' and e' models to the evaluation of IR performance, the first part of this dissertation developed an experimental methodology to characterize image noise and resolution in a manner relevant to nonlinear IR. Results showed that this method was efficient and meaningful in characterizing system performance while accounting for the nonlinearity of IR at multiple contrast and noise levels. It was also shown that when certain criteria were met, the measurement error could be kept below 10%, allowing challenging measuring conditions with low object contrast and high image noise.

The second part of this dissertation incorporated the noise and resolution characterizations developed in the first part into the d' calculations, and evaluated the performance of IR and conventional filtered backprojection (FBP) for detection tasks. Results showed that compared to FBP, IR required less dose to achieve a threshold performance accuracy level, therefore potentially reducing the required dose. The dose-saving potential of IR was not constant but dependent on the task properties, with subtle tasks (small size and low contrast) enabling more dose saving than conspicuous tasks. Results also showed that at a fixed dose level, IR allowed more subtle tasks to exceed a threshold performance level, demonstrating the overall superior performance of IR for detection tasks.

The third part of this dissertation evaluated IR performance in volume quantification tasks with a conventional experimental method. The volume quantification performance of IR was measured using an anthropomorphic chest phantom and compared to FBP in terms of accuracy and precision. Results showed that across a wide range of dose and slice thickness, IR led to accuracy significantly different from that of FBP, highlighting the importance of calibrating or expanding current segmentation software to incorporate the image characteristics of IR. Results also showed that despite IR's great noise reduction in uniform regions, IR in general had quantification precision similar to that of FBP, possibly due to IR's diminished noise reduction at edges (such as nodule boundaries) and IR's loss of resolution at low dose levels.

The last part of this dissertation mathematically predicted IR performance in volume quantification tasks with an e' model that was extended in three respects: the task modeling, the segmentation software modeling, and the characterization of noise and resolution properties. Results showed that the extended e' model correlated with experimental precision across a range of image acquisition protocols, nodule sizes, and segmentation software. In addition, compared to experimental assessments of quantification performance, e' required significantly less computational time, such that it can easily be employed in clinical studies to verify quantitative compliance and to optimize clinical protocols for CT volumetry.

The research in this dissertation has two important clinical implications. First, because d' values reflect detection accuracy and e' values reflect quantification precision, this work provides a framework for evaluating IR with diagnostic accuracy as the endpoint. Second, because the d' and e' models are computationally much more efficient than conventional observer studies, clinical protocols with IR can be optimized in a timely fashion, and the compliance of clinical performance can be examined routinely. / Dissertation
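A minimal numerical sketch of the kind of frequency-domain detectability index used in this line of work is given below, assuming a non-prewhitening observer with a discretized task function W, task transfer function (TTF), and noise power spectrum (NPS) on a 2-D spatial-frequency grid; the particular task, resolution, and noise models are placeholders rather than measurements from the dissertation.

```python
import numpy as np

def npw_detectability(W, TTF, NPS, df2):
    """Non-prewhitening observer detectability index on a 2-D frequency grid:
    d'^2 = [sum(W^2 * TTF^2) * df2]^2 / [sum(W^2 * TTF^2 * NPS) * df2]."""
    num = (np.sum(W ** 2 * TTF ** 2) * df2) ** 2
    den = np.sum(W ** 2 * TTF ** 2 * NPS) * df2
    return np.sqrt(num / den)

# Placeholder task, system, and noise models (illustrative only):
f = np.fft.fftfreq(256, d=0.5)                 # spatial frequencies (cycles/mm) for 0.5 mm pixels
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)

W = np.abs(np.sinc(8.0 * rho))                 # placeholder task function for a low-contrast nodule
TTF = np.exp(-(rho / 0.4) ** 2)                # assumed Gaussian resolution (task transfer) model
NPS = 1e-3 * (rho + 1e-3) * np.exp(-(rho / 0.6) ** 2)   # assumed ramp-like CT noise power spectrum

df2 = (f[1] - f[0]) ** 2                       # area of one 2-D frequency bin
print(f"d' = {npw_detectability(W, TTF, NPS, df2):.2f}")
```

Swapping in measured TTF and NPS curves for a given reconstruction algorithm and dose level is what lets this index compare IR and FBP without repeating a full observer study.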
|
6 |
Startle Distinguishes Task Expertise. January 2018
Recently, it was demonstrated that startle-evoked movements (SEMs) are present during individuated finger movements (index finger abduction), but only following intense training. This demonstrates that changes in motor planning, which occur through training (motor learning, a characteristic that can provide researchers and clinicians with information about overall rehabilitative effectiveness), can be analyzed with SEM. The objective here was to determine whether SEM is a sensitive enough tool for differentiating expertise (task solidification) in a common everyday task (typing). If so, SEM may be useful during rehabilitation for time-stamping when task-specific expertise has occurred, and possibly even when a sufficient dosage of motor training (although not tested here) has been delivered following impairment. It was hypothesized that SEM would be present for all fingers of an expert population, but for no fingers of a non-expert population. A total of 9 expert typists (75.2 ± 9.8 WPM) and 8 non-expert typists (41.6 ± 8.2 WPM), all right-hand dominant and with no previous neurological or current upper extremity impairment, were evaluated. SEM was robustly present (all p < 0.05) in all fingers of the experts except the middle, and absent in all fingers of the non-experts except the little finger (where it was less robust). Taken together, these results indicate that SEM is a measurable behavioral indicator of motor learning and that it is sensitive to task expertise, opening it up for potential clinical utility. / Dissertation/Thesis / Masters Thesis Biomedical Engineering 2018
|
7 |
Improving Driving Ability After Stroke: A scoping review of interventions within occupational therapy. Backe, Karoline. January 2022
Stroke is a leading cause of disability in the world, and cognitive impairments post stroke are common. Driving is an occupation of great importance to many individuals and enables participation in society, but due to cognitive deficits after stroke it can be a difficult task to perform adequately. The aim of this study was to review and map interventions used to improve driving ability after stroke within occupational therapy practice. A literature search was conducted using Arksey and O'Malley's six-stage framework [1], with searches made in four different databases. Seven articles were found and used for further analysis. Results showed two main categories of interventions: task-specific training, consisting of either simulator-based training or behind-the-wheel training in real traffic, and training of underlying cognitive functions focused on driving-related abilities. Both interventions overall showed improvement of driving ability, with task-specific training being somewhat superior. Considering the ease of implementation, cognitive training with a specific focus on driving skills could be used in current occupational therapy practice. Larger studies might prove task-specific training to be markedly superior, which could then motivate more simulator-based intervention possibilities. Future studies could also focus on improving self-awareness as a factor.
|
8 |
Bridging the Gap Between Autonomous Skill Learning and Task-Specific Planning. Sen, Shiraj. 01 February 2013
Skill acquisition and task-specific planning are essential components of any robot system, yet they have long been studied in isolation. This, I contend, is due to the lack of a common representational framework. I present a holistic approach to planning robot behavior that uses previously acquired skills to represent control knowledge (and objects) directly, and uses this background knowledge to build plans in the space of control actions.
Actions in this framework are closed-loop controllers constructed from combinations of sensors, effectors, and potential functions. I show how robots can use reinforcement learning techniques to acquire sensorimotor programs. The agent then builds a functional model of its interactions with the world as distributions over the acquired skills. In addition, I present two planning algorithms that can reason about a task using these functional models. These algorithms are then applied to a variety of tasks, such as object recognition and object manipulation, on two different robot platforms.
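As a hedged illustration of the controller pattern described above (sensors, effectors, and potential functions composed into a closed-loop action), the sketch below descends a potential defined over sensor feedback until the gradient vanishes; the specific potential, interfaces, and gains are invented for the example and are not taken from the thesis.

```python
import numpy as np
from typing import Callable

class Controller:
    """A closed-loop action: read a sensor, descend a potential, command an effector.
    The action terminates ('converges') when the potential's gradient becomes small."""

    def __init__(self, sensor: Callable[[], np.ndarray],
                 effector: Callable[[np.ndarray], None],
                 potential: Callable[[np.ndarray], float],
                 gain: float = 0.5, tol: float = 1e-3):
        self.sensor, self.effector, self.potential = sensor, effector, potential
        self.gain, self.tol = gain, tol

    def _grad(self, x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
        # Central finite-difference gradient of the potential at the sensed state.
        e = np.eye(len(x)) * eps
        return np.array([(self.potential(x + e[i]) - self.potential(x - e[i])) / (2 * eps)
                         for i in range(len(x))])

    def step(self) -> bool:
        x = self.sensor()
        g = self._grad(x)
        if np.linalg.norm(g) < self.tol:
            return True                        # converged: action complete
        self.effector(-self.gain * g)          # command a move down the potential
        return False

# Hypothetical usage: servo a 2-D end-effector position toward a goal.
state = np.array([1.0, -2.0])
goal = np.array([0.5, 0.5])
reach = Controller(sensor=lambda: state.copy(),
                   effector=lambda dx: np.add(state, dx, out=state),
                   potential=lambda x: float(np.sum((x - goal) ** 2)))
while not reach.step():
    pass
print("final state:", state)
```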
|
9 |
Task-specific summarization of networks: Optimization and Learning. Ekhtiari Amiri, Sorour. 11 June 2019
Networks (also known as graphs) are everywhere. People-contact networks, social networks, email communication networks, and internet networks (among others) are examples of graphs in our daily life. The increasing size of these networks makes it harder to understand them. Instead, summarizing these graphs can reveal key patterns and also help in sensemaking, as well as accelerate existing graph algorithms. Intuitively, different summaries are desired for different purposes. For example, to stop viral infections, one may want to find an effective policy to immunize people in a people-contact network. In this case, a high-quality network summary should highlight structurally important nodes. Others may want to detect communities in the same people-contact network, and hence the summary should show cohesive groups of nodes. This implies that for each task, we should design a specific method to reveal related patterns. Despite the importance of task-specific summarization, there has not been much work in this area.
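For instance, the sketch below (using networkx as an assumed tool, not the thesis' own code) produces two different summaries of the same small contact graph: the high-degree nodes one might prioritize for immunization, versus the cohesive communities one would report for a community-oriented task.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()   # small stand-in for a people-contact network

# Task 1: immunization -- highlight structurally important nodes (simple degree ranking here).
immunization_summary = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]
print("immunization candidates (node, degree):", immunization_summary)

# Task 2: community detection -- highlight cohesive groups of nodes.
community_summary = [sorted(c) for c in greedy_modularity_communities(G)]
print("communities:", community_summary)
```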
Hence, in this thesis, we design task-specific summarization frameworks for univariate and multivariate networks. We start with optimization-based approaches that summarize graphs for a particular task, and finally propose general frameworks that automatically learn how to summarize for a given task and generalize to similar networks.
1. Optimization-based approaches: Given a large network and a task, we propose summarization algorithms to highlight specific characteristics of the graph (i.e., structure, attributes, labels, dynamics) with respect to the task. We develop effective and efficient algorithms for various tasks such as content-aware influence maximization and time segmentation. In addition, we study many real-world networks and their summary graphs (e.g., people-contact and news-blog networks) and visualize them to make sense of their characteristics given the input task.
2. Learning-based approaches: As our next step, we propose a unified framework which learns the process of summarization itself for a given task. First, we design a generalizable algorithm to learn to summarize graphs for a set of graph optimization problems. Next, we go further and add sparse human feedback to the learning process for the given optimization task.
To the best of our knowledge, we are the first to systematically bring the necessity of considering the given task to the forefront and emphasize the importance of learning-based approaches in network summarization. Our models and frameworks lead to meaningful discoveries. We also solve problems from various domains such as epidemiology, marketing, social media, cybersecurity, and interactive visualization. / Doctor of Philosophy / Networks (also known as graphs) are everywhere. People-contact networks, social networks, email communication networks, internet networks (among others) are examples of graphs in our daily life. The increasing size of these networks makes it harder to understand them. Instead, summarizing these graphs can reveal key information and also help in sensemaking as well as accelerating existing graph analysis methods. Intuitively, different summarizes are desired for different purposes. For example, to stop viral infections, one may want to find an effective policy to immunize people in a people-contact network. In this case, a high-quality network summary should highlight roughly important nodes. Others may want to detect friendship communities in the same people-contact network, and hence, the summary should show cohesive groups of nodes. This implies that for each task, we should design a specific method to reveal related patterns. Despite the importance of task-specific summarization, there has not been much work in this area.
Hence, in this thesis, we design task-specific summarization frameworks for various types of networks with different approaches. To the best of our knowledge, we are the first to systematically bring the necessity of considering the given task to the forefront and to emphasize the importance of learning-based approaches in network summarization. Our models and frameworks lead to meaningful discoveries. We also solve problems from various domains such as epidemiology, marketing, social media, cybersecurity, and interactive visualization.
|
10 |
A Distributed Approach to Crawl Domain Specific Hidden Web. Desai, Lovekeshkumar. 03 August 2007
A large amount of online information resides on the invisible web: web pages generated dynamically from databases and other data sources that are hidden from current crawlers, which retrieve content only from the publicly indexable Web. Specifically, such crawlers ignore the tremendous amount of high-quality content "hidden" behind search forms, and pages that require authorization or prior registration, in large searchable electronic databases. To extract data from the hidden web, it is necessary to find the search forms and fill them with appropriate information to retrieve the maximum amount of relevant information. To address the complex challenges that arise when attempting to search the hidden web, namely the extensive analysis of both search forms and retrieved content, it becomes essential to design and implement a distributed web crawler that runs on a network of workstations to extract data from the hidden web. We describe the software architecture of this distributed and scalable system and also present a number of novel techniques that went into its design and implementation to extract the maximum amount of relevant data from the hidden web while achieving high performance.
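As a hedged illustration of the form-discovery and form-filling step described above (not the system's actual implementation), the sketch below fetches a page, locates its search forms, and submits one with candidate query terms using the requests and BeautifulSoup libraries; the URL, field names, and fill-in strategy are placeholders.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def find_search_forms(url):
    """Return (action_url, method, input_names) for each form found on the page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    forms = []
    for form in soup.find_all("form"):
        action = urljoin(url, form.get("action") or url)
        method = (form.get("method") or "get").lower()
        names = [i.get("name") for i in form.find_all(["input", "select", "textarea"]) if i.get("name")]
        forms.append((action, method, names))
    return forms

def submit(action, method, data):
    """Fill and submit a discovered form; returns the result page for content extraction."""
    if method == "post":
        return requests.post(action, data=data, timeout=10).text
    return requests.get(action, params=data, timeout=10).text

# Hypothetical usage against a placeholder searchable database:
for action, method, names in find_search_forms("https://example.org/search"):
    query = {name: "stroke rehabilitation" for name in names}   # naive fill-in strategy
    page = submit(action, method, query)
    print(action, method, len(page), "bytes of hidden-web content")
```

In a distributed deployment, each workstation would run this discovery/submission loop over a partition of the target sites and forward the retrieved pages for analysis.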
|