61 |
Modeling and Analysis of Location Service Management in Vehicular Ad Hoc Networks. Saleet, Hanan (January 2010)
Recent technological advances in wireless communication and the pervasiveness of wireless communication devices have offered novel and promising solutions for enabling vehicles to communicate with each other, establishing a decentralized communication system. An emerging solution in this area is the Vehicular Ad Hoc Network (VANET), in which vehicles cooperate in receiving and delivering messages to each other. VANETs can provide a viable alternative in situations where existing infrastructure communication systems become overloaded, fail (for instance, due to natural disaster), or are inconvenient to use. Nevertheless, the success of VANETs revolves around a number of key elements, an important one of which is the way messages are routed between sources and destinations. Without an effective message routing strategy, the success of VANETs will remain limited.
In order for messages to be routed to a destination effectively, the location of the destination must be determined. Since vehicles move relatively fast and in a random manner, determining the location of the destination vehicle, and hence the optimal message routing path to it, constitutes a major challenge. Recent approaches for tackling this challenge have resulted in a number of location service management protocols. Though these protocols have demonstrated good potential, they still suffer from a number of impediments, including signalling volume (particularly in large-scale VANETs), inability to deal with network voids, and inability to leverage locality for communication between network nodes.
In this thesis, a Region-based Location Service Management Protocol (RLSMP) is proposed. The protocol is a self-organizing framework that uses message aggregation and geographical clustering to minimize the volume of signalling overhead. To the best of my knowledge, RLSMP is the first protocol that uses message aggregation in both updating and querying, and as such it promises scalability, locality awareness, and fault tolerance.
Location service management further addresses the issue of routing location updating and querying messages. Updating and querying messages should be exchanged between the network nodes and the location servers with minimum delay. This necessity introduces a pressing need to support Quality of Service (QoS) routing in VANETs. To mitigate the QoS routing challenge in VANETs, the thesis proposes an Adaptive Message Routing (AMR) protocol that utilizes the network's local topology information to find the route with minimum end-to-end delay, while maintaining the required thresholds for connectivity probability and hop count. The QoS routing problem is formulated as a constrained optimization problem, for which a genetic algorithm is proposed. The thesis presents experiments to validate the proposed protocol and test its performance under various network conditions.
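As a rough illustration of the kind of formulation the AMR genetic algorithm addresses, the sketch below evolves candidate routes over a toy topology, minimizing end-to-end delay with penalty terms for the hop-count and connectivity-probability thresholds. The topology, thresholds, and mutation scheme are illustrative assumptions, not details from the thesis.

```python
import random

# Toy topology: adjacency list of (neighbour, link delay in ms, link connectivity probability).
GRAPH = {
    "A": [("B", 5, 0.95), ("C", 2, 0.80)],
    "B": [("D", 4, 0.90)],
    "C": [("D", 9, 0.99)],
    "D": [],
}
MAX_HOPS = 3            # assumed hop-count threshold
MIN_CONNECTIVITY = 0.7  # assumed connectivity-probability threshold

def random_path(src, dst, rng):
    """Generate a random simple path by walking from src until dst or a dead end."""
    path, node = [src], src
    while node != dst:
        choices = [n for n, _, _ in GRAPH[node] if n not in path]
        if not choices:
            return None
        node = rng.choice(choices)
        path.append(node)
    return path

def fitness(path):
    """End-to-end delay, plus large penalties when QoS constraints are violated."""
    delay, connectivity = 0.0, 1.0
    for u, v in zip(path, path[1:]):
        d, p = next((d, p) for n, d, p in GRAPH[u] if n == v)
        delay += d
        connectivity *= p
    penalty = 1000.0 * (len(path) - 1 > MAX_HOPS) + 1000.0 * (connectivity < MIN_CONNECTIVITY)
    return delay + penalty

def evolve(src, dst, pop_size=20, generations=30, seed=1):
    """Minimal GA loop: random initial paths, truncation selection, and
    mutation by regrowing a random tail of a surviving path."""
    rng = random.Random(seed)
    pop = [p for p in (random_path(src, dst, rng) for _ in range(pop_size)) if p]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: max(2, pop_size // 4)]
        children = []
        for parent in survivors:
            cut = rng.randrange(len(parent))
            tail = random_path(parent[cut], dst, rng)
            if tail:
                child = parent[:cut] + tail
                if len(set(child)) == len(child):  # keep simple paths only
                    children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

On this toy graph the route A-B-D (delay 9 ms, connectivity 0.855) beats A-C-D (delay 11 ms) while satisfying both constraints; penalty-based fitness is one standard way to turn such a constrained problem into an unconstrained one a GA can search.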
|
62 |
Goal-based trajectory analysis for unusual behaviour detection in intelligent surveillance. Tung, Frederick (January 2010)
Video surveillance systems are playing an increasing role in preventing and investigating crime, protecting public safety, and safeguarding national security. In a typical surveillance installation, a human operator has to constantly monitor a large array of video feeds for suspicious behaviour. As the number of cameras increases, information overload makes manual surveillance increasingly difficult, compounding other factors such as human fatigue and boredom.
The objective of an intelligent vision-based surveillance system is to automate the monitoring and event detection components of surveillance, alerting the operator only when unusual behaviour or other events of interest are detected. While most traditional methods for trajectory-based unusual behaviour detection rely on low-level trajectory features, this thesis improves a recently introduced approach that makes use of higher-level features of intentionality. Individuals in a scene are modelled as intentional agents instead of simply objects. Unusual behaviour detection then becomes a task of determining whether an agent's trajectory is explicable in terms of learned spatial goals. The proposed method extends the original goal-based approach in three ways: first, the spatial scene structure is learned in a training phase; second, a region transition model is learned to describe normal movement patterns between spatial regions; and third, classification of trajectories in progress is performed in a probabilistic framework using particle filtering. Experimental validation on three published third-party datasets demonstrates the validity of the proposed approach.
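The region transition model described above can be sketched as a first-order Markov model over learned spatial regions, with a trajectory flagged as unusual when its average transition log-likelihood falls below a threshold. The region names, smoothing constant, and threshold here are illustrative assumptions; the thesis additionally handles trajectories in progress with particle filtering.

```python
import math
from collections import defaultdict

def learn_transitions(trajectories, smoothing=0.5):
    """Count region-to-region transitions in normal trajectories and convert
    them to smoothed transition probabilities (a first-order Markov model)."""
    counts = defaultdict(lambda: defaultdict(float))
    regions = set()
    for traj in trajectories:
        regions.update(traj)
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1.0
    model = {}
    for a in regions:
        total = sum(counts[a].values()) + smoothing * len(regions)
        model[a] = {b: (counts[a][b] + smoothing) / total for b in regions}
    return model

def avg_log_likelihood(model, traj):
    """Average log-probability of the trajectory's region transitions."""
    steps = list(zip(traj, traj[1:]))
    return sum(math.log(model[a][b]) for a, b in steps) / len(steps)

def is_unusual(model, traj, threshold=-2.0):
    """Flag trajectories whose transitions are poorly explained by the model."""
    return avg_log_likelihood(model, traj) < threshold

# Ten "normal" trajectories through three hypothetical scene regions.
normal = [["entry", "hall", "exit"]] * 10
model = learn_transitions(normal)
```

With this toy data, a trajectory following the learned entry-to-exit pattern scores well, while one moving against the normal flow falls below the threshold and is flagged.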
|
63 |
Weighted Opposition-Based Fuzzy Thresholding. Ensafi, Pegah (January 2011)
With the rapid growth of digital imaging, image processing techniques are widely used in many industrial and medical applications. Image thresholding plays an essential role in image processing and computer vision applications, and it has a vast domain of usage. Areas such as document image analysis, scene or map processing, satellite imaging, and material inspection in quality-control tasks are examples of applications that employ image thresholding or segmentation to extract useful information from images.
Medical image processing is another area that has extensively used image thresholding to help the experts to better interpret digital images for a more accurate diagnosis or to plan treatment procedures.
Opposition-based computing, on the other hand, is a recently introduced model that can be employed to improve the performance of existing techniques. In this thesis, the idea of oppositional thresholding is explored to introduce new and better thresholding techniques.
A recent method, called Opposite Fuzzy Thresholding (OFT), combines fuzzy sets with the opposition idea and, based on preliminary experiments, appears reasonably successful in thresholding some medical images.
In this thesis, a Weighted Opposite Fuzzy Thresholding (WOFT) method is presented that produces more accurate and reliable results than the parent algorithm. This claim is verified through experimental trials using both synthetic and real-world images.
Experimental evaluations were conducted on two sets of synthetic and medical images to validate the robustness of the proposed method in improving the accuracy of the thresholding process when fuzzy and oppositional ideas are combined.
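The opposition idea itself is simple to sketch: the opposite of a threshold t over the grey-level range [gmin, gmax] is gmin + gmax - t, and a weighted scheme can evaluate a threshold and its opposite together. The sketch below pairs a standard fuzziness measure with such a weighted search; the fuzziness measure, the weighting scheme, and all parameters are illustrative assumptions, not the published OFT or WOFT formulations.

```python
import numpy as np

def linear_fuzziness(image, t):
    """Linear index of fuzziness for threshold t (Huang-Wang style): each
    pixel's membership to its class decays with distance from the class mean,
    and fuzziness averages min(mu, 1 - mu); lower means a crisper split."""
    g = image.astype(float)
    lo, hi = g[g <= t], g[g > t]
    if lo.size == 0 or hi.size == 0:
        return np.inf
    c = g.max() - g.min() + 1e-9
    mu = np.where(g <= t,
                  1.0 / (1.0 + np.abs(g - lo.mean()) / c),
                  1.0 / (1.0 + np.abs(g - hi.mean()) / c))
    return 2.0 * np.minimum(mu, 1.0 - mu).mean()

def opposite_fuzzy_threshold(image, weight=0.5):
    """Evaluate each candidate threshold t together with its opposite
    (gmin + gmax - t), score the pair by a weighted sum of the two fuzziness
    values, and return the member of the best pair with the lower fuzziness."""
    gmin, gmax = int(image.min()), int(image.max())
    best_t, best_score = gmin, np.inf
    for t in range(gmin + 1, gmax):
        t_op = gmin + gmax - t
        f, f_op = linear_fuzziness(image, t), linear_fuzziness(image, t_op)
        score = weight * f + (1.0 - weight) * f_op
        if score < best_score:
            best_score = score
            best_t = t if f <= f_op else t_op
    return best_t
```

On a bimodal image with dark values around 10-20 and bright values around 200-210, the search settles on a threshold between the two modes, where the fuzziness of both the threshold and its opposite is low.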
|
64 |
Designing Discoverable Digital Tabletop Menus for Public Settings. Seto, Amanda Mindy (January 2012)
Ease of use with digital tabletops in public settings is contingent on how well the system invites and guides interaction. The same can be said for the interface design and the individual graphical user interface elements of these systems. One such interface element is the menu. Before a menu can be used, however, it must first be discovered within the interface. Existing research on digital tabletop menu design does not address this issue of discovering or opening a menu. This thesis investigates how the interface and interaction of digital tabletops can be designed to encourage menu discoverability in the context of public settings.
A set of menu invocation designs, varying in the invocation element and the use of animation, is proposed. These designs are then evaluated through an observational study at a museum, observing users' interactions in a realistic public setting. Findings from this study support the use of discernible and recognizable interface elements (buttons), complemented by animation to attract and guide users, as a discoverable menu invocation design. Additionally, findings posit that when engaging with a public digital tabletop display, users transition through exploration and discovery states before becoming competent with the system. Finally, insights from this study point to a set of design recommendations for improving menu discoverability.
|
65 |
Optical Coherence Tomography Image Analysis of Corneal Tissue. Zaboli, Shiva (January 2011)
Because of the ubiquitous use of contact lenses, there is considerable interest in better understanding the anatomy of the cornea, the part of the eye in contact with an exterior lens. Recent technological developments in high-resolution Optical Coherence Tomography (OCT) devices allow for the in-vivo observation of the structure of the human cornea in 3D and at cellular-level resolution.
Prolonged wear of contact lenses, inflammations, scarring and diseases can change the structure and physiology of the human cornea. OCT is capable of in-vivo, non-contact, 3D imaging of the human cornea. In this research, novel image processing algorithms were developed to process OCT images of the human cornea, in order to determine the corneal optical scattering and transmission. The algorithms were applied to OCT data sets acquired from multiple subjects before, during and after prolonged (3 hours) wear of soft contact lenses and eye patches, in order to investigate the changes in the corneal scattering associated with hypoxia. Results from this study demonstrate the ability of OCT to measure the optical scattering of corneal tissue and to monitor its changes resulting from external stress (hypoxia).
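The scattering measurement rests on the depth decay of OCT signal intensity. As a minimal sketch of the general approach (not the thesis's algorithms), a single-scattering Beer-Lambert model, I(z) = I0 * exp(-2*mu*z), can be fit to the log of an A-scan's depth profile to recover an attenuation coefficient:

```python
import numpy as np

def scattering_coefficient(ascan, dz_mm):
    """Least-squares fit of log(intensity) vs depth; under the single-scattering
    model I(z) = I0 * exp(-2 * mu * z) (factor of 2 for the round trip of
    light), the slope of the fit gives the attenuation coefficient mu in 1/mm."""
    z = np.arange(len(ascan)) * dz_mm
    slope, _ = np.polyfit(z, np.log(ascan), 1)
    return -slope / 2.0

# Synthetic depth profile with a known coefficient of 3.0 /mm and 5 um axial sampling.
z = np.arange(100) * 0.005
ascan = 1000.0 * np.exp(-2.0 * 3.0 * z)
```

On the synthetic profile the fit recovers the known coefficient exactly; on real corneal A-scans, changes in this coefficient before and after hypoxic stress would be the quantity of interest.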
|
66 |
Documenting & Using Cognitive Complexity Mitigation Strategies (CCMS) to Improve the Efficiency of Cross-Context User Transfers. Bhagat, Rahul (January 2011)
Cognitive complexity mitigation strategies are methods and approaches that users employ to reduce the apparent complexity of problems, thus making them easier to solve. These strategies are often effective because they mitigate the limitations of human working memory and attention resources. Such cognitive complexity mitigation strategies are used throughout the design, development, and operational processes of complex systems. Thus, a better understanding of these strategies, and of methods that leverage them, can help improve the efficiency of such processes.
Additionally, changes in the use of these strategies across various environments can identify cognitive differences in operating and developing across these contexts. This knowledge can help improve the effectiveness of cross-context user transfers by suggesting change management processes that incorporate the degree of cognitive difference across contexts.
In order to document cognitive complexity mitigation strategies and the change in their usage, two application domains are studied. First, cognitive complexity mitigation strategies used by designers during the engineering design process are identified through an ethnographic immersion with a participating engineering firm, followed by an analysis of the designers' logbooks and validation interviews with the designers. Results include the identification of five strategies used by the designers to mitigate design complexity: Blackbox Modeling, Whitebox Modeling, Decomposition, Visualization, and Prioritized Lists. The five complexity mitigation strategies are probed further across a larger sample of engineering designers, and the usage frequency of these strategies is assessed across commonly performed engineering design activities, namely Selection, Configuration, and Parametric activities. The results indicate the preferred use of certain strategies based on the engineering activity being performed. Such preferential usage of complexity mitigation strategies is also assessed with regard to Original and Redesign project types; however, there is no indication of biased strategy usage across these two project characterizations. These results are an example of a usage-frequency-based difference analysis; such analyses help identify the strategies that experience increased or reduced usage when transferring across activities.
In contrast to the first application domain, which captures changes in how often strategies are used across contexts, the second application domain is a method of assessing differences based on how a specific strategy is used differently across contexts. This alternative method is developed through a project that aims to optimize the transfer of air traffic controllers across different airspace sectors. The method uses a previously researched complexity mitigation strategy, known as a structure-based abstraction, to develop a difference analysis tool called the Sector Abstraction Binder. This tool is used to perform cognitive difference analyses between air traffic control sectors by leveraging characteristic variations in how structure-based abstractions are applied across different sectors. The Sector Abstraction Binder is applied to two high-level airspace sectors to demonstrate the utility of the method.
|
67 |
Touch and Emotion in Haptic and Product Design. Lee, Bertina (18 April 2012)
The emotional experience of products can have enormous impact on the overall product experience: someone who is feeling positive is more likely to be accepting of novel products or to be more tolerant of unexpected or unusual interface behaviours. Being able to improve users’ emotions through product interaction has clear benefits and is currently the focus of designers all over the world.
The extent to which touch-based information can affect a user’s experience and observable behaviour has been given relatively little attention in haptic technology or other touch-based products where research has tended to focus on psychophysics relating to technical development, in the case of the former, and usability in the case of the latter. The objective of this research was therefore to begin to explore generalizable and useful relationship(s) between design parameters specific to the sense of touch and the emotional response to tactile experiences. To this end, a theoretical ’touch-emotion model’ was developed that incorporates stages from existing information and emotion processing models, and a subset of pathways (the ‘Affective’, ‘Cognitive’, and ‘Behaviour Pathways’) was explored.
Four experiments were performed to examine how changes in various touch factors, such as surface roughness and availability of haptic (that is, touch-based) information during exploration, impacted user emotional experience and behaviour in the context of the model’s framework. These experiments also manipulated factors related to the experience of touch in real-world situations, such as the availability of visual information and product context.
Exploration of the different pathways of the touch-emotion model guided the analysis of the experiments. In exploring the Affective Pathway, a robust relationship was found between increasing roughness and decreasing emotional valence (n = 36, p < 0.005), regardless of the availability of haptic or visual information. This finding expands earlier research that focused on the effect of tactile stimuli on user preference.
The impact of texture on the Cognitive Pathway was examined by priming participants to think of the stimuli as objects varying in emotional commitment, such as a common mug (lower) or a personal cell phone (higher). Emotional response again decreased as roughness increased, regardless of primed context (n = 27, p < 0.002), and the primed contexts marginally appeared to generally improve or reduce emotional response (n = 27, p < 0.08).
Finally, the exploration of the Behaviour Pathway considered the ability of roughness-evoked emotion to act as a mediator between physical stimuli and observable behaviour, revealing that, contrary to the hypothesis that increased emotional valence would increase time spent reflecting on the stimuli, increased emotion magnitude (regardless of the positive or negative valence of the emotion) was associated with increased time spent in reflection (n = 33, p < 0.002). Results relating to the Behaviour Pathway suggested that the portion of the touch-emotion model that included the last stages of information processing, observable behaviour, may need to be revised. However, the insights of the Affective and Cognitive Pathway analyses are consistent with the information processing stages within those pathways and give support to the related portions of the touch-emotion model.
The analysis of demographics data collected from all four experiments also revealed interesting findings which are anticipated to have application in customizing haptic technology for individual users. For example, correlations were found between self-reported tactual importance (measured with a questionnaire) and age (n = 79, r = 0.28, p < 0.03) and between self-reported tactual importance and sensitivity to increased roughness (n = 79, r = -0.27, p < 0.04). Higher response times were also observed with increased age (rIT = 0.49, rRT = 0.48; p < 0.01).
This research contributes to the understanding of how emotion and emotion-evoked behaviour may be impacted by changing touch factors, using the exemplar of roughness as the touch factor of interest, experienced multimodally and in varying situations. If a design goal is to contribute to user emotional experience of a product, then the findings of this work have the potential to impact design decisions relating to surface texture components of hand-held products as well as virtual surface textures generated by haptic technology. Further, the touch-emotion model may provide a guide for the systematic exploration of the relationships between surface texture, cognitive processing, and emotional response.
|
68 |
Nonparametric Neighbourhood Based Multiscale Model for Image Analysis and Understanding. Jain, Aanchal (24 August 2012)
Image processing applications such as image denoising, image segmentation, object detection, object recognition and texture synthesis often require a multi-scale analysis of images. This is useful because different features in the image become prominent at different scales. Traditional imaging models, which have been used for multi-scale analysis of images, have several limitations such as high sensitivity to noise and structural degradation observed at higher scales. Parametric models make certain assumptions about the image structure which may or may not be valid in several situations. Non-parametric methods, on the other hand, are very flexible and adapt to the underlying image structure more easily. It is highly desirable to have efficient non-parametric models for image analysis, which can be used to build robust image processing algorithms with little or no prior knowledge of the underlying image content. In this thesis, we propose a non-parametric pixel neighbourhood based framework for multi-scale image analysis and apply the model to build image denoising and saliency detection algorithms for the purpose of illustration. It has been shown that the algorithms based on this framework give competitive results without using any prior information about the image statistics.
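As one concrete instance of a pixel-neighbourhood nonparametric estimator, the sketch below uses the classic non-local-means weighting: each pixel is replaced by an average of nearby pixels, weighted by how similar their surrounding patches are. This is a standard stand-in for illustration, not the framework proposed in the thesis, and the parameter values are assumptions.

```python
import numpy as np

def neighbourhood_denoise(image, patch=1, search=3, h=10.0):
    """Estimate each pixel as a weighted average of pixels in a search window,
    with weights given by the similarity of the surrounding (2*patch+1)^2
    neighbourhoods: w = exp(-||P_i - P_j||^2 / h^2). No parametric model of
    the image is assumed; the neighbourhoods themselves drive the estimate."""
    img = image.astype(float)
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            pi, pj = i + pad, j + pad
            ref = padded[pi - patch:pi + patch + 1, pj - patch:pj + patch + 1]
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    qi, qj = pi + di, pj + dj
                    cand = padded[qi - patch:qi + patch + 1,
                                  qj - patch:qj + patch + 1]
                    w = np.exp(-((ref - cand) ** 2).sum() / h ** 2)
                    num += w * padded[qi, qj]
                    den += w
            out[i, j] = num / den
    return out
```

Because patches that straddle an edge differ greatly from patches on either side, their weights collapse toward zero, so flat regions are smoothed while structure is preserved; a multi-scale variant would repeat this with growing patch and search sizes.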
|
70 |
High-Level Intuitive Features (HLIFs) for Melanoma Detection. Amelard, Robert (January 2013)
Feature extraction of segmented skin lesions is a pivotal step for implementing accurate decision support systems. Existing feature sets combine many ad-hoc calculations and are unable to easily provide intuitive diagnostic reasoning. This thesis presents the design and evaluation of a set of features for objectively detecting melanoma in an intuitive and accurate manner. We call these "high-level intuitive features" (HLIFs).
The current clinical standard for detecting melanoma, the deadliest form of skin cancer, is visual inspection of the skin's surface. A widely adopted rule for detecting melanoma is the "ABCD" rule, whereby the doctor identifies the presence of Asymmetry, Border irregularity, Colour patterns, and Diameter. The adoption of specialized medical devices for this purpose is extremely slow due to the added temporal and financial burden. Therefore, recent research efforts have focused on detection support systems that analyse images of skin lesions acquired with standard consumer-grade cameras. The central benefit of these systems is the provision of technology with low barriers to adoption. Recently proposed skin lesion feature sets have been large sets of low-level features attempting to model the widely adopted ABCD criteria of melanoma. These result in high-dimensional feature spaces, which are computationally expensive and sparse due to the lack of available clinical data. It is difficult to convey diagnostic rationale using these feature sets because of their inherently ad-hoc mathematical nature.
This thesis presents and applies a generic framework for designing HLIFs for decision support systems relying on intuitive observations. By definition, a HLIF is designed explicitly to model a human-observable characteristic such that the feature score can be intuited by the user. Thus, along with the classification label, visual rationale can be provided to further support the prediction. This thesis applies the HLIF framework to design 10 HLIFs for skin cancer detection, following the ABCD rule. That is, HLIFs modeling asymmetry, border irregularity, and colour patterns are presented.
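To illustrate what makes a feature "high-level and intuitive", the sketch below scores asymmetry directly from a binary lesion mask, so the score itself can be explained to a clinician. The axis choice and normalization are assumptions for illustration, not the thesis's actual HLIF definitions.

```python
import numpy as np

def asymmetry_score(mask):
    """Reflect the lesion mask left-right and up-down about its bounding box
    and report the largest fraction of lesion area with no mirror-image
    counterpart (0 = perfectly symmetric, approaching 1 = highly asymmetric).
    The score reads directly as 'how much of the lesion fails to mirror
    itself', which is the intuition behind the ABCD rule's 'A'."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(bool)
    area = crop.sum()
    lr = np.logical_xor(crop, crop[:, ::-1]).sum() / (2.0 * area)
    ud = np.logical_xor(crop, crop[::-1, :]).sum() / (2.0 * area)
    return max(lr, ud)
```

A filled square scores 0, while a triangular (one-sided) lesion mask scores well above 0; in a decision support setting, the mismatched region itself could be highlighted as the visual rationale accompanying the score.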
This thesis evaluates the effectiveness of HLIFs in a standard classification setting. Using publicly available images obtained in unconstrained environments, the set of HLIFs is compared against, and combined with, a recently published low-level feature set. Since the focus is on evaluating the features, illumination correction and manually defined segmentations are used, along with a linear classification scheme. The promising results indicate that HLIFs capture more relevant information than low-level features, and that concatenating the HLIFs to the low-level feature set improves accuracy metrics. Visual intuitive information is provided to illustrate how intuitive diagnostic reasoning can be offered to the user.
|