71
Performance analysis of a mobile robotic tele-ultrasonography system over 2.5G/3G communication networks. Garawi, Salem A. January 2006 (has links)
The merging of state-of-the-art mobile and wireless communication technologies with healthcare is the research subject of this thesis. The emerging concept represents the evolution of m-health systems from traditional desktop 'telemedicine' platforms to wireless and mobile configurations. Current developments in wireless communications, integrated with developments in pervasive and ultrasound monitoring technologies, will have a radical impact on future healthcare delivery systems. The work in this thesis formed part of developing an end-to-end mobile robotic tele-ultrasonography system (OTELO), which brings together the convergence of wireless communications and network technologies with the concept of 'connected healthcare' anytime and anywhere. The OTELO system allows an expert to examine a distant patient by remotely and virtually controlling a robotic ultrasound probe that produces ultrasound images transmitted to the expert side in real time. The research objectives cover the performance analysis and validation of the system over both 2.5G and 3G networks. Real-time robotic tele-ultrasonography over mobile networks is a challenging task in terms of reliable, delay-sensitive and medically acceptable quality of service. The approaches taken to fulfil the requirements of the system's functional modalities were based on performance metrics measured in both simulated and real-network environments. These metrics cover the wireless path, the wired path and the end-to-end connectivity of the system, and can be summarised as: the compression ratio of the transmitted medical ultrasound images, data throughput, latency, delay jitter, round-trip time and packet loss. The major part of the study concentrated on the asymmetric nature of the end-to-end data interaction; therefore the uplink channel characteristics of the patient station were comprehensively investigated for their feasibility with respect to the system's medical QoS over both communication networks (2.5G and 3G). The research tasks were implemented in both a simulated environment and on a real operating network, and most of the data used were real data acquired from the field. The results were analysed, and comparative performances between the simulated and real networks were discussed and justified. The first approach addressed the capability of the GPRS (2.5G) network and its limitations in performing real-time ultrasonography. This was an essential sub-task leading towards a more specific and deeper analysis of the performance of the system over the promising UMTS (3G) network, where controlled real-time ultrasound data transmission was investigated and the results thoroughly analysed. The analysis of these sub-tasks formed the basis for studying ultrasound transmission objectively against the medical QoS requirements of a real-time tele-ultrasound session, namely image size, image quality and frame rate. To improve the medical QoS over relatively unreliable (wireless) environments, a new adaptation technique for enhanced wireless ultrasound streaming was developed for the OTELO environment and its performance results presented. The results of this research show the successful transmission of robotically acquired medical images and diagnostically acceptable medical video streams in a 3G wireless network environment.
It provides important and essential knowledge on m-health systems where closed-loop robot control and delay-sensitive, real-time telemedicine are required. Future work in this area is also presented for enhancing the performance of this mobile robotic telemedical system, especially for future use in 3.5G and 4G mobile environments.
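The medical QoS analysis above rests on a handful of per-packet network metrics. As a rough illustration only (this is not the thesis's analysis code; the function and variable names are hypothetical), the following Python sketch derives mean latency, jitter and packet loss from send/receive timestamp logs:

```python
from statistics import mean

def link_metrics(sent, received):
    """Illustrative per-stream QoS metrics from packet timestamp logs.

    sent     -- dict {packet_id: send_time_in_seconds}
    received -- dict {packet_id: receive_time_in_seconds}
    """
    delivered = [p for p in sent if p in received]
    latencies = [received[p] - sent[p] for p in delivered]
    # Jitter taken here as the mean absolute difference between consecutive one-way delays.
    jitter = mean(abs(a - b) for a, b in zip(latencies, latencies[1:])) if len(latencies) > 1 else 0.0
    loss = 1.0 - len(delivered) / len(sent) if sent else 0.0
    return {
        "mean_latency_s": mean(latencies) if latencies else None,
        "jitter_s": jitter,
        "packet_loss": loss,
    }

# Example: three packets sent, one lost in transit.
sent = {1: 0.00, 2: 0.02, 3: 0.04}
received = {1: 0.15, 3: 0.21}
print(link_metrics(sent, received))
```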
72
Security for mobile ad-hoc networks. Panaousis, Emmanouil A. January 2012 (has links)
Ad-hoc networks are crucial enablers of next-generation communications. Such networks can be formed and reconfigured dynamically, and they can be mobile, standalone or inter-networked with other networks. Mobile Ad-hoc NETworks (MANETs) are established by a group of autonomous nodes that communicate with each other by forming a multihop radio network and maintain connectivity in an infrastructureless manner. Security of the connections between devices and networks is crucial. Current MANET routing protocols inherently trust all participants as being cooperative by nature and depend on neighbouring nodes to route packets to a destination. Such a model allows malicious nodes to potentially harm MANET communication links or reveal confidential data by launching different kinds of attacks. The main objective of this thesis is to investigate and propose security mechanisms for MANET communications, with particular emphasis on emergency scenarios where first responders' devices communicate by establishing a decentralised wireless network. To this end, we have proposed security mechanisms for the innovative routing and peer-to-peer overlay mechanisms for emergency MANETs proposed supplementarily to the findings of this thesis. Such security mechanisms guarantee confidentiality and integrity of emergency MANET communications. We have also proposed novel ways of improving availability in MANETs in the presence of intrusion detection systems by increasing the nodes' lifetime, based on a novel game-theoretic routing protocol for MANETs. We have thoroughly evaluated the performance of all the proposed mechanisms using a network simulator. The main objective of undertaking these evaluations was to guarantee that security introduces affordable overhead, thereby respecting the Quality of Service of MANET communication links.
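The game-theoretic routing idea above trades node lifetime against security risk when choosing routes. The thesis protocol itself is not reproduced here; the sketch below only illustrates, with a made-up utility and hypothetical names, what an energy- and risk-aware next-hop choice can look like in general:

```python
def choose_next_hop(neighbours):
    """Pick a neighbour by a simple utility trading off residual energy and
    estimated risk of misbehaviour. Purely illustrative; not the routing
    protocol developed in the thesis.

    neighbours -- list of dicts with 'id', 'residual_energy' in [0, 1]
                  and 'risk' in [0, 1] (e.g. derived from IDS alerts).
    """
    def utility(n):
        return n["residual_energy"] * (1.0 - n["risk"])
    return max(neighbours, key=utility)["id"] if neighbours else None

print(choose_next_hop([
    {"id": "A", "residual_energy": 0.9, "risk": 0.4},
    {"id": "B", "residual_energy": 0.6, "risk": 0.1},
]))
```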
73
Accurate human pose tracking using efficient manifold searching. Moutzouris, Alexandros January 2013 (has links)
In this thesis we propose novel methods for accurate markerless 3D pose tracking. Training data are used to represent specific activities, using dimensionality reduction methods. The proposed methods attempt to keep the computational cost low without sacrificing the accuracy of the final result. We also deal with the problem of stylistic variation between the motions seen in the training and testing datasets. Solutions addressing both single-action and multiple-action scenarios are presented. Specifically, appropriate temporal non-linear dimensionality reduction methods are applied to learn compact manifolds that are suitable for fast exploration. Such manifolds are efficiently searched by a deterministic gradient-based method. In order to deal with stylistic differences in human actions, we represent human poses using multiple levels. Searching through multiple levels reduces the risk of being trapped in a local optimum and therefore leads to higher accuracy. An observation function controls the process to minimise the computational cost of the method. Finally, we propose a multi-activity pose tracking method, which combines action recognition with single-action pose tracking. To achieve reliable online action recognition, the system is equipped with a short memory. All methods are tested on publicly available datasets. Results demonstrate their high accuracy and relatively low computational cost in comparison to state-of-the-art methods.
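A minimal sketch of deterministic gradient-based search over a learned low-dimensional manifold, of the kind described above, is given below. The latent-to-pose decoding and the observation likelihood are abstracted into a single user-supplied cost function, and the finite-difference gradient, step sizes and all names are assumptions rather than the thesis implementation:

```python
import numpy as np

def manifold_search(cost, z0, step=0.05, eps=1e-3, iters=100):
    """Deterministic gradient descent in a low-dimensional latent space.

    cost -- callable mapping a latent vector z to a scalar observation cost
            (e.g. silhouette mismatch after decoding z to a full pose).
    z0   -- initial latent vector, typically the prediction from the
            previous frame.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(z)
        for i in range(z.size):           # finite-difference gradient
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (cost(z + dz) - cost(z - dz)) / (2 * eps)
        z = z - step * grad
    return z

# Toy example: the "observation" prefers the latent point (1, -2).
target = np.array([1.0, -2.0])
print(manifold_search(lambda z: np.sum((z - target) ** 2), z0=[0.0, 0.0]))
```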
74
Cognitive and adaptive routing framework for mobile ad-hoc networks. Ramrekha, Tipu Arvind January 2012 (has links)
In this thesis, we investigate the field of distributed multi-hop routing in Mobile Ad Hoc Networks (MANETs). MANETs are suitable for autonomous communication in remote areas lacking infrastructure or in situations where existing infrastructure has been destroyed. One important communication service domain of this kind is Public Protection and Disaster Relief (PPDR), where rescuers require high-bandwidth mobile communications in an ad hoc fashion. The main objective of this thesis is to investigate and propose a realistic framework for cognitive MANET routing that is able to adapt itself to the requirements of users while being constrained by the topological state. We propose to investigate the main proactive and reactive emerging standard MANET routing protocols at the Internet Engineering Task Force (IETF) and extend their functionalities to form a cognitive and adaptive routing approach. We thus propose a cognitive and adaptive routing framework that is better suited to diverse MANET scenarios than state-of-the-art protocols, mainly in terms of scalability. We also design our approach based on realistic assumptions and suitability for modern Android and iOS devices. In summary, we introduce the area of MANET routing and the state of the art in the field, focussing on scalable routing approaches; derive QoS routing models for variable-sized MANETs; validate these models using event-based ns-2 simulations; and analyse the scalable performance of current approaches. As a result, we present and evaluate our novel converged cognitive and adaptive routing protocol called ChaMeLeon (CML) for PPDR scenarios. A realistic "Cognitive and Adaptive Module" is then presented that has been implemented in modern smart devices. Finally, we end the thesis with our conclusions and avenues for future work in the field.
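As an illustration of the adaptive idea behind a hybrid protocol such as ChaMeLeon, switching between proactive and reactive routing according to the observed topology, the sketch below selects a routing mode from the current node count. The threshold and names are hypothetical and are not taken from the CML specification:

```python
def select_routing_mode(node_count, proactive_limit=30):
    """Illustrative mode selection for a hybrid MANET routing protocol.

    Small networks can afford proactive (table-driven, OLSR-like) routing;
    larger networks fall back to reactive (on-demand, AODV-like) routing.
    The threshold here is hypothetical, not a value used by ChaMeLeon.
    """
    return "proactive" if node_count <= proactive_limit else "reactive"

for n in (10, 80):
    print(n, "nodes ->", select_routing_mode(n))
```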
75
Partial differential equations for medical image segmentation. Bagherinakhjavanlo, Bashir January 2014 (has links)
This study is concerned with image segmentation techniques using mathematical models based on elastic curves or surfaces defined within an image domain that can move under the influence of a defined energy. These active contour models use internal and external forces generated from curves or surfaces in 2D and 3D image data. The algorithms that measure these energies must cope with non-homogeneous objects and regions, low-contrast boundaries and image noise. The study investigates level sets, which employ an energy formulation defined by partial differential equations (PDEs), that are sensitive to weak boundaries yet robust to noise whilst maintaining computational stability. The methodology is evaluated using medical imagery, which commonly suffers from high levels of noise and blur and exhibits weak boundaries between different types of adjacent tissue. An energy based on PDEs has been used to evolve an image contour from an initial guess, using image forces derived from region properties to drive the search to locate the boundaries of the desired objects, and including the maximum and minimum curvature function to enable length shortening in the curve evolution. It is applied to both 2D and 3D CTA datasets for the segmentation of abdominal and thoracic aortic aneurysms (AAA and TAA). For some image data the methodology can be initialised automatically using a contour detected after intensity thresholding. Non-homogeneous regions require a manual initialisation that crosses the boundary between the aorta and the thrombus. Sussman’s re-initialisation has been used in the 3D algorithm to maintain stability in the evolving boundary, as a consequence of the re-formulation from the continuous to the discrete domain. A hybrid method is developed that combines a novel approach using region information (i.e. intensities inside and outside the object) and edge information, computed using a diffusion-based approach integrated into a level set formulation, to guide the initial curve to the object boundary by finding strong edges with local minima. Boundary information supports finding a curve of locally minimal length and only examines data on the contour. Using Green’s theorem, region information is used to address the boundary leakage problem, as it minimises the energy related to the whole image data, and the moving curve is stopped by strong gradients on the borders of objects. Finally, a Gabor filter has been integrated into the hybrid algorithm to enhance the image and support the detection of textured regions of interest. The method is evaluated on both synthetic and real image data and compared with the region-based methods of Chan-Vese and Li et al.
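For readers unfamiliar with region-based level set evolution, the sketch below implements one explicit update step of a Chan-Vese style energy (a region term plus a curvature/length term), the family of methods the thesis uses as a comparison baseline. It is a simplified illustration, not the hybrid region-edge formulation developed in the thesis, and the parameter values are arbitrary:

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5, mu=0.2, eps=1.0):
    """One explicit update of a region-based (Chan-Vese style) level set.

    phi -- level set function (object region where phi > 0)
    img -- grey-level image as a 2-D array
    """
    inside, outside = phi > 0, phi <= 0
    c1 = img[inside].mean() if inside.any() else 0.0   # mean intensity inside
    c2 = img[outside].mean() if outside.any() else 0.0  # mean intensity outside
    # Smoothed Dirac delta restricts the update to a band around the contour.
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)
    # Curvature of phi from central differences: div(grad(phi)/|grad(phi)|).
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    div_y, _ = np.gradient(gy / norm)
    _, div_x = np.gradient(gx / norm)
    curvature = div_x + div_y
    force = -(img - c1) ** 2 + (img - c2) ** 2 + mu * curvature
    return phi + dt * delta * force

# Toy example: a bright square on a dark background, initialised with a circle.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
yy, xx = np.mgrid[:64, :64]
phi = 25.0 - np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2)
for _ in range(50):
    phi = chan_vese_step(phi, img)
```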
76
EBusiness analytics framework (EBAF): to enable SMEs to gain business intelligence for competitive advantage. Pesaran Behbahani, Masoud January 2014 (has links)
Recent technological advances have resulted in increasingly large databases. The fast, efficient and useful analysis and interpretation of these data to improve business intelligence is critical to the success of all organisations. This thesis presents a new framework that utilises a new multilayer mining theory and is based on business intelligence methods, data mining techniques, online analytical processing (OLAP) and online transactional processing (OLTP). Existing decision-making modelling approaches for executive information systems have three main shortcomings and limitations, to different degrees: (a) problems in accessing new types and new structures of data sources; (b) failing to provide organisational insight and panorama; and (c) generating an excessive amount of trivial information. The hypothesis of this research is that the new proactive Multidimensional Multilayer Mining Management Model (5M) framework proposed in this thesis will overcome the shortcomings listed above. The 5M framework is made up of six components: (a) multilayer mining structures; (b) measurable objectives conversion models; (c) operational transaction databases; (d) object-model data marts; (e) data cubes; and (f) a core analysis engine which analyses the multidimensional cubes, the multilayer mining structures and the enterprise key performance indicators. The 5M framework was evaluated by developing an implementation of an instance of the framework called the EBusiness Analytics Framework (EBAF). The 5M framework and the subsequent EBAF framework were built by carrying out action research and a case study in an ebusiness company, where the framework was subject to implementation, reflection, adaptation and improvement in order to fulfil the requirements of the hypothesis and those of a real business. EBAF implemented all six components of the 5M framework in the Visual Studio environment, using various algorithms, tools and programming languages, including MDX, DMX, SQL, VB.Net and C#. Further empirical case studies can be carried out to evaluate the effectiveness and efficiency of the 5M framework.
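The data-cube component of the 5M framework aggregates transactional (OLTP) records along chosen dimensions so that KPIs can be computed on the result. The sketch below shows a minimal roll-up of that kind in plain Python; it merely stands in for the MDX/OLAP machinery actually used in EBAF, and the field names are invented:

```python
from collections import defaultdict

def rollup(rows, dims, measure):
    """Minimal cube-style roll-up: sum a measure along chosen dimensions.

    rows    -- iterable of dicts, e.g. transactional records from an OLTP store
    dims    -- tuple of dimension keys to group by
    measure -- key of the numeric measure to aggregate
    """
    cube = defaultdict(float)
    for r in rows:
        cube[tuple(r[d] for d in dims)] += r[measure]
    return dict(cube)

orders = [
    {"region": "UK", "month": "2014-01", "revenue": 120.0},
    {"region": "UK", "month": "2014-02", "revenue": 90.0},
    {"region": "DE", "month": "2014-01", "revenue": 75.0},
]
print(rollup(orders, ("region",), "revenue"))   # a revenue KPI sliced by region
```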
77
An integrated software quality model and its adaptability within monolithic and virtualized cloud environments. Kiruthika, J. January 2014 (has links)
One fundamental problem in current software development life cycles, particularly in distributed and non-deterministic environments, is that software quality assurance and measurement do not start early enough in the development process. Recent research work has been trying to address this problem by using software quality assurance (SQA) measurement frameworks. However, before such frameworks are developed and adopted, there is a need to have a clear understanding of, and to define, what is meant by quality. To help this definition process, numerous approaches and quality models have been developed. Many of the early quality models have followed a hierarchical approach with little scope for expansion. More recent models have been developed that follow a 'define your own' approach. Although an improvement, difficulties arise when comparing quality across projects, due to their tailored nature. The aim of this project is to develop a new generic framework for software quality assurance which addresses the problems of existing approaches. The proposed framework will blend various quality measurement approaches and will provide statistical, probabilistic and subjective measurements for both required and actual quality. Unlike existing techniques, autodidactic mechanisms are incorporated which can be used to measure any software entity type. This, however, should include the measurement of actual quality using software quality factors that are based on experimental measurements, i.e. not only on the subjective view of stakeholders. Moreover, the framework should also include the conversion into software measurements of historical reports and data that can be extracted from problem reporting systems, such as the date of problem identification, the source of the report, critical tendencies of the report, the cause of the problem, etc., and other available statistical information. The proposed framework retains the knowledge about software defects and their impact on quality, and has the capacity to add new knowledge dynamically.
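One requirement stated above is the conversion of historical problem reports into software measurements. The sketch below shows, with hypothetical fields and example measurements, how records from a problem reporting system might be turned into simple figures such as a report rate and breakdowns by cause and source; it is illustrative only, not the framework's own mechanism:

```python
from collections import Counter
from datetime import date

def report_measurements(reports):
    """Turn historical problem reports into simple software measurements.

    reports -- list of dicts with 'identified' (datetime.date), 'source'
               and 'cause'. The chosen measurements are examples only.
    """
    dates = sorted(r["identified"] for r in reports)
    span_days = max((dates[-1] - dates[0]).days, 1) if dates else 1
    return {
        "reports_per_30_days": 30 * len(reports) / span_days,
        "by_cause": dict(Counter(r["cause"] for r in reports)),
        "by_source": dict(Counter(r["source"] for r in reports)),
    }

reports = [
    {"identified": date(2014, 1, 5), "source": "customer", "cause": "design"},
    {"identified": date(2014, 2, 9), "source": "tester", "cause": "coding"},
]
print(report_measurements(reports))
```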
78
A framework for the classification and detection of design defects and software quality assurance. Allanqawi, Khaled Kh. S. Kh January 2015 (has links)
In current software development lifecycles in heterogeneous environments, the pitfalls businesses have to face are that software defect tracking, measurement and quality assurance do not start early enough in the development process. In fact, the cost of fixing a defect in a production environment is much higher than in the initial phases of the Software Development Life Cycle (SDLC), which is particularly true for Service Oriented Architecture (SOA). Thus, the aim of this study is to develop a new framework for defect tracking, detection and quality estimation in the early stages, particularly the design stage, of the SDLC. Part of the objectives of this work is to conceptualise, borrow and customise from known frameworks, such as object-oriented programming, to build a solid framework using automated rule-based intelligent mechanisms to detect and classify defects in the software design of SOA. The framework on design defects and software quality assurance (DESQA) will blend various design defect metrics and quality measurement approaches and will provide measurements for both defect and quality factors. Unlike existing frameworks, mechanisms are incorporated for the conversion of defect metrics into software quality measurements. The framework is evaluated using a research tool, supported by a sample used to complete the Design Defects Measuring Matrix, and a data collection process. In addition, the evaluation using a case study aims to demonstrate the use of the framework on a number of designs and to produce an overall picture regarding defects and quality. The implementation part demonstrated how the framework can predict the quality level of the designed software. The results showed that a good level of quality estimation can be achieved based on the number of design attributes, the number of quality attributes and the number of SOA Design Defects. The assessment shows that metrics provide guidelines to indicate the progress that a software system has made and the quality of the design. Using these guidelines, we can develop more usable and maintainable software systems to fulfil the demand for efficient software applications. Another valuable result coming from this study is that developers try to keep backwards compatibility when they introduce new functionality; sometimes they perform necessary breaking changes to the same newly introduced elements in future versions, thereby giving their clients time to adapt their systems. This is a very valuable practice for developers because they have more time to assess the quality of their software before releasing it. Other improvements in this research include the investigation of further design attributes and SOA Design Defects, which can be computed by extending the tests we performed.
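The reported relationship between design attributes, quality attributes and SOA design defect counts can be pictured with a toy estimate such as the one below. The formula is hypothetical and is not the DESQA prediction model; it only illustrates the kind of defect-density-to-quality mapping being referred to:

```python
def estimated_quality(design_attributes, quality_attributes, soa_design_defects):
    """Toy quality estimate: defects found per examined attribute, mapped to a
    score in [0, 1]. Hypothetical formula for illustration only."""
    examined = design_attributes + quality_attributes
    if examined == 0:
        return 0.0
    defect_density = soa_design_defects / examined
    return max(0.0, 1.0 - defect_density)

print(estimated_quality(design_attributes=40, quality_attributes=10,
                        soa_design_defects=5))   # 0.9
```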
79
Improving accuracy of recommender systems through triadic closure. Tselenti, Panagiota January 2017 (has links)
The exponential growth of social media services has led to the information overload problem, which information filtering and recommender systems address by exploiting various techniques. One popular technique for making recommendations is based on trust statements between users in a social network. Yet explicit trust statements are usually very sparse, leading to the need to expand trust networks by inferring new trust relationships. Existing methods exploit the propagation property of trust to expand existing trust networks; however, their performance is strongly affected by the density of the trust network. Nevertheless, the utilisation of existing trust networks can model the users' relationships, enabling the inference of new connections. The current study advances the existing methods and techniques for developing a trust-based recommender system by proposing a novel method to infer trust relationships and achieve a fully expanded trust network. In other words, the current study proposes a novel, effective and efficient approach to deal with information overload by expanding existing trust networks so as to increase accuracy in recommender systems. More specifically, this study proposes a novel method to infer trust relationships, called TriadicClosure. The method is based on the homophily phenomenon of social networks and, more specifically, on the triadic closure mechanism, a fundamental mechanism of link formation in social networks via which communities emerge naturally, especially when the network is very sparse. Additionally, a method called JaccardCoefficient is proposed to calculate the trust weight of the inferred relationships based on the Jaccard coefficient similarity measure. Both proposed methods exploit structural information of the trust graph to infer and calculate the trust value. Experimental results on real-world datasets demonstrate that the TriadicClosure method outperforms existing state-of-the-art methods by substantially improving prediction accuracy and coverage of recommendations. Moreover, the method improves the performance of the examined state-of-the-art methods in terms of accuracy and coverage when combined with them. On the other hand, the JaccardCoefficient method for calculating the weight of the inferred trust relationships did not produce stable results, with the majority showing a negative impact on performance for both accuracy and coverage.
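A minimal sketch of the two ideas named above, inferring new trust edges by triadic closure and weighting them with the Jaccard coefficient of the endpoints' trust neighbourhoods, is given below. It operates on a plain adjacency-set representation; the exact closure rule and weighting used in the study may differ, and the function and variable names are assumptions:

```python
def infer_trust(trust):
    """Infer new trust edges by triadic closure and weight them with the
    Jaccard coefficient of the endpoints' trust neighbourhoods.

    trust -- dict mapping each user to the set of users they already trust.
    Returns a dict {(u, w): weight} for inferred (previously absent) edges.
    """
    inferred = {}
    for u, trusted in trust.items():
        for v in trusted:                       # u trusts v ...
            for w in trust.get(v, set()):       # ... and v trusts w
                if w != u and w not in trusted and (u, w) not in inferred:
                    a, b = trust.get(u, set()), trust.get(w, set())
                    union = a | b
                    inferred[(u, w)] = len(a & b) / len(union) if union else 0.0
    return inferred

graph = {"alice": {"bob", "dave"}, "bob": {"carol", "dave"},
         "carol": {"dave"}, "dave": set()}
print(infer_trust(graph))   # alice -> carol inferred via bob, weight 0.5
```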
80
An automatic machine-learning framework for testing service-oriented architecture. Altalabani, Osama January 2014 (has links)
Today, Service Oriented Architecture (SOA) systems such as web services have the advantage of offering defined protocol and standard requirement specifications by means of a formal contract between the service requestor and the service provider, for example WSDL (Web Services Description Language), BPEL (Business Process Execution Language) and BPMN (Business Process Model and Notation). This gives a high degree of flexibility to design, development and Information Technology (IT) infrastructure implementation, and promises a world where computing resources work transparently and efficiently. Furthermore, the rich interface standards and specifications of SOA web services (collectively referred to as the WS-* Architecture) enable service providers and consumers to solve important problems, as these interfaces enable the development of interoperable computing environments that incorporate end-to-end security, reliability and transaction support, thus promoting existing IT infrastructure investments. However, many of the benefits of SOA become challenges for testing approaches and frameworks due to their specific design and implementation characteristics, which cause many testability problems. A number of testing approaches and frameworks have therefore been proposed in the literature to address various aspects of SOA testability. However, most of these approaches and frameworks are based on intuition and are not carried out in a systematic manner grounded in the standards and specifications of SOA. Generally, they lack sophisticated and automated testing that provides data mining and knowledge discovery in accordance with the requirements of the SOA-based system, which would consequently provide better testability, deeper intelligence and prudence. Thus, this thesis proposes an automated and systematic testing framework based on user requirements, both functional and non-functional, with the support of machine-learning techniques for intelligent reliability, real-time monitoring, and coverage analysis of SOA protocols and standard requirements, in order to improve the testability of SOA-based systems. This thesis addresses the development, implementation and evaluation of the proposed framework by means of a proof-of-concept prototype for testing SOA systems based on the web services protocol stack specifications. The framework extends to intelligent analysis of SOA web service specifications and the generation of test cases based on static test analysis with machine-learning support.
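As a small illustration of test generation driven by service specifications, the sketch below extracts operation names from a WSDL 1.1 document and prints one test-case skeleton per operation. The parsing approach, sample document and generated names are assumptions for illustration; the thesis framework's machine-learning-supported analysis is considerably richer:

```python
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"   # WSDL 1.1 namespace

def operations_from_wsdl(wsdl_xml):
    """Extract operation names declared in a WSDL 1.1 document.

    A test framework could use such a list as the starting point for
    generating functional test-case skeletons, one per operation.
    """
    root = ET.fromstring(wsdl_xml)
    return [op.get("name")
            for op in root.iter(f"{{{WSDL_NS}}}operation")
            if op.get("name")]

sample = f"""
<definitions xmlns="{WSDL_NS}" name="Quotes">
  <portType name="QuotePort">
    <operation name="GetQuote"/>
    <operation name="ListSymbols"/>
  </portType>
</definitions>"""
for name in operations_from_wsdl(sample):
    print(f"def test_{name.lower()}(): ...   # generated skeleton")
```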