  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

The standard plenoptic camera : applications of a geometrical light field model

Hahne, Christopher January 2016 (has links)
The plenoptic camera is an emerging technology in computer vision that captures a light field image in a single exposure, allowing the perspective view as well as the optical focus to be changed computationally, a process known as refocusing. Until now there has been no general method to pinpoint the object planes that have been brought to focus, or the stereo baselines of perspective views, posed by a plenoptic camera. Previous research has presented simplified ray models to prove the concept of refocusing and to enhance image and depth map quality, but has lacked reliable distance estimates and an efficient refocusing hardware implementation. In this thesis, a pair of light rays is treated as a system of linear functions whose solution yields ray intersections indicating distances to refocused object planes or positions of the virtual cameras that project perspective views. A refocusing image synthesis is derived from the proposed ray model and further developed into an array of switch-controlled semi-systolic FIR convolution filters. Their real-time performance is verified through simulation and through implementation on an FPGA using VHDL. A series of experiments is carried out with different lenses and focus settings, and prediction results are compared with those of a real ray simulation tool and with processed light field photographs assessed by a blur metric. Predictions accurately match measurements in light field photographs and deviate by less than 0.35 % from the real ray simulation. A benchmark assessment of the proposed refocusing hardware implementation suggests a computation-time speed-up of 99.91 % in comparison with a state-of-the-art technique. It is expected that this research will support the prototyping stage of plenoptic cameras and microscopes, as it helps specify depth sampling planes and thus localise objects, and provides a power-efficient refocusing hardware design for full-video applications such as broadcasting and motion picture arts.
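The core geometric idea, treating a pair of light rays as a system of linear functions and solving for their intersection, can be sketched as follows. This is a minimal illustrative example, not the thesis's actual ray model: each ray is assumed to take the form z = m·x + c with slope m and intercept c, and the intersection depth z stands in for the distance to a refocused object plane (or the position of a virtual camera).

```python
import math

def intersect_rays(m1, c1, m2, c2):
    """Intersect two rays modelled as linear functions z = m*x + c.

    Solving m1*x + c1 = m2*x + c2 gives x = (c2 - c1) / (m1 - m2).
    Returns the intersection point (x, z), or None for parallel rays.
    """
    if math.isclose(m1, m2):
        return None  # parallel rays never intersect
    x = (c2 - c1) / (m1 - m2)
    z = m1 * x + c1
    return x, z
```

With more than two rays, the same 2x2 solve is repeated pairwise; the resulting depths indicate which object plane a given refocusing setting brings into focus.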
52

Strategic framework to minimise information security risks in the UAE

Alkaabi, Ahmed January 2014 (has links)
The transition to ICT (Information and Communication Technology) has had a significant influence on many aspects of society. Although the computerisation process has motivated the alignment of different technical and human factors with the expansion process, the technical pace of the transition surpasses human adaptation to change. Much research on ICT development has shown that ICT security is essentially a political and managerial act that must not disregard the relevant cultural characteristics of a society. Information sharing is a necessary activity in society, enabling the exchange of knowledge and facilitating communication. However, certain information should be shared only with selected parties or even kept private. Information sharing by humans forms the main obstacle to the security measures undertaken by organisations to protect their assets. Moreover, certain cultural traits play a major role in thwarting information security measures. The Arab culture of the United Arab Emirates is one of strong collectivism, featuring strong ties among individuals. Sharing sensitive information, including passwords for online accounts, can be found in some settings in many cultures, but usually with reason and generally on a small scale. However, this research includes a study of three main Gulf Cooperation Council (GCC) countries, namely Saudi Arabia (KSA), the United Arab Emirates (UAE) and Oman, showing that there is a similarly significant level of sensitive information sharing among employees across the region. This is shown to contribute substantially to the compromise of user digital authentication, ultimately putting users’ accounts at risk. The research continued with a comparison between the United Kingdom (UK) and the Gulf Cooperation Council (GCC) countries in terms of attitudes and behaviour towards information sharing.
It was evident that there is a significant difference between GCC Arab culture and UK culture in terms of information sharing. Respondents from the GCC countries were more inclined to share sensitive information with their families and friends than the UK respondents were. However, UK respondents still revealed behaviour in some contexts that may expose the authentication mechanism, and consequently other digital accounts that require credentials, to potential threats. It was shown that lack of awareness and cultural influence are the main drivers of sensitive information sharing among family members and friends in the GCC. The research hence investigated channels and measures for reducing the prevalence of social engineering attacks, such as legislative measures, technological measures, and education and awareness. It found that cultural change is necessary to remedy sensitive information sharing as a cultural trait. Education and awareness are perhaps the best vehicles for cultural change and should be designed effectively. Accordingly, the work critically analysed three national cybersecurity strategies, those of the United Kingdom (UK), the United States (U.S.) and Australia (AUS), in order to identify any information security awareness education designed to educate online users about the risk of sharing sensitive information, including passwords. The analysis aimed to assess the possible adoption by the UAE of certain elements, if any, of these strategies. The strategies discussed only user awareness as a means of reducing information sharing. However, awareness in itself may not achieve the required reduction in information sharing among family members and friends. Rather, computer users should be educated about the risks of such behaviour so that they understand it and change.
As a result, the research conducted an intervention study that proposed a UAE-focused strategy designed to promote information security education for the younger generation to mitigate the risk of sensitive information sharing. The results obtained from the intervention study of school children formed a basis for the information security education framework also proposed in this work.
53

Harmonised shape grammar in design practice

Kunkhet, Arus January 2015 (has links)
The aim of this thesis is to address the context and harmony issues in shape grammar (SG) by applying knowledge from the field of natural language processing (NLP). Current shape grammars are designed for static models (Ilčík et al., 2010), are limited in domain (Chau et al., 2004), involve time-consuming processes (Halatsch, 2008), demand high user skill (Lee and Tang, 2009), and cannot guarantee aesthetic results (Huang et al., 2009). Current approaches to shape grammar produce an infinite number of designs and often meaningless shapes. This thesis addresses this problem by proposing a harmonised shape grammar framework that applies five levels of analysis, namely the morphological, lexical, syntactic, semantic, and pragmatic levels, to enhance the overall design process. By requiring shapes to be semantically and pragmatically well formed, the generated shapes can be contextual and harmonious. The semantic analysis level focuses on the character’s anatomy, body function, and habitat in order to produce meaningful designs, whereas the pragmatic level achieves harmony in design by selecting relevant character attributes, characteristics, and behaviour. To test the framework, this research applies the five natural language processing levels to a set of 3D humanoid characters. To validate the framework, a set of criteria related to aesthetic requisites has been applied to the generated humanoid characters; these include the principles of design (i.e. contrast, emphasis, balance, unity, pattern, and rhythm) and aspects of human perception in design (i.e. visceral, behavioural and reflective). The framework has ensured that the interrelationships between the design parts are mutually beneficial and that all elements of the humanoid characters combine to accentuate their similarities and bind the parts into a whole.
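The idea of constraining grammar productions by semantic context can be sketched with a toy example. Everything here is hypothetical (the rule table, the habitat constraint table, and the function name are this sketch's own inventions, not the thesis's framework): a production replacing a shape label is accepted only if the replacement is semantically admissible for the character's habitat, which is the flavour of check the semantic analysis level performs.

```python
# Hypothetical production rules: label -> candidate replacement shapes.
RULES = {
    "torso": ["torso+arm", "torso+wing"],
}

# Hypothetical semantic constraints: habitat -> admissible shapes.
SEMANTICS = {
    "aquatic": {"torso+fin"},
    "aerial": {"torso+wing"},
    "terrestrial": {"torso+arm"},
}

def apply_rule(label, habitat):
    """Return the first replacement that is semantically admissible
    for the given habitat, or None if no candidate passes the check."""
    allowed = SEMANTICS.get(habitat, set())
    for candidate in RULES.get(label, []):
        if candidate in allowed:
            return candidate
    return None
```

An unconstrained grammar would happily emit "torso+wing" for an aquatic character; the semantic filter is what keeps the generated shape meaningful rather than merely well formed.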
54

Hard synchronous real-time communication with the timed-token MAC protocol

Wang, Jun January 2009 (has links)
The timely delivery of inter-task real-time messages over a communication network is key to successfully developing distributed real-time computer systems. These systems are rapidly developed and increasingly used in many areas, such as industrial automation. This work concentrates on the timed-token Medium Access Control (MAC) protocol, one of the most suitable candidates for supporting real-time communication due to its inherent timing property of bounded medium access time. The support of real-time communication with the timed-token MAC protocol has been studied using a rigorous mathematical analysis. Specifically, to guarantee the deadlines of synchronous messages (the real-time messages defined in the timed-token MAC protocol), a novel and practical approach is developed for allocating synchronous bandwidth to a general message set whose minimum deadline (Dmin) is larger than the Target Token Rotation Time (TTRT). Synchronous bandwidth is defined as the maximum time for which a node can transmit its synchronous messages each time it receives the token. It is a sensitive parameter in the control of synchronous message transmission and must be properly allocated to individual nodes to guarantee the deadlines of real-time messages. Other issues related to the schedulability test, including the required buffer size and the Worst Case Achievable Utilisation (WCAU) of the proposed approach, are then discussed. Simulations and numerical examples demonstrate that this novel approach performs better than any previously published local synchronous bandwidth allocation (SBA) scheme in terms of its ability to guarantee real-time traffic. A proper selection of the TTRT, which maximises the WCAU of the proposed SBA scheme, is also addressed. The work presented in this thesis is compatible with any network standard in which the timed-token MAC protocol is employed, and can therefore be applied by engineers building real-time systems using these standards.
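The two checks at the heart of such an analysis can be illustrated with a small sketch. The visit bound floor(Di/TTRT) − 1 is the classic worst-case number of token arrivals within a deadline from the timed-token literature; the function names and the simplified model (per-node bandwidths H, a single overhead term tau, no per-visit overhead) are this sketch's own assumptions, not the thesis's allocation scheme.

```python
import math

def protocol_constraint_ok(H, tau, TTRT):
    """Protocol constraint: the synchronous bandwidths of all nodes
    plus the token-passing overhead must fit within one TTRT."""
    return sum(H) + tau <= TTRT

def deadline_guaranteed(C_i, D_i, H_i, TTRT):
    """Deadline test for one node: in any interval of length D_i the
    node receives at least floor(D_i / TTRT) - 1 token visits (classic
    worst-case bound), transmitting up to H_i per visit. The message
    set with total demand C_i per period is guaranteed if that much
    transmission time covers C_i."""
    visits = math.floor(D_i / TTRT) - 1
    return visits >= 1 and visits * H_i >= C_i
```

An SBA scheme is essentially a rule for choosing the H_i so that `protocol_constraint_ok` holds globally while `deadline_guaranteed` holds at every node; the thesis's contribution is such a rule for message sets with Dmin > TTRT.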
55

Early screening and diagnosis of diabetic retinopathy

Leontidis, Georgios January 2016 (has links)
Diabetic retinopathy (DR) is a chronic, progressive and potentially vision-threatening eye disease. Early detection and diagnosis of DR, prior to the development of any lesions, is paramount for dealing with it efficiently and managing its consequences. This thesis investigates and proposes a number of candidate geometric and haemodynamic biomarkers, derived from fundus images of the retinal vasculature, which can be reliably utilised for identifying the progression from diabetes to DR. Numerous studies in the literature investigate some of these biomarkers in independent normal, diabetic and DR cohorts. However, to the best of my knowledge, none investigates more than 100 biomarkers altogether, both geometric and haemodynamic, for identifying the progression to DR, while also using a novel experimental design in which exactly the same matched junctions and subjects are evaluated over a four-year period covering the last three years pre-DR (still diabetic eye) and the onset of DR (the progressors’ group). Multiple additional conventional experimental designs, such as non-matched junctions, a non-progressors’ group, and combinations of these, are also adopted in order to demonstrate the superiority of this type of analysis for retinal features. This thesis therefore aims to present a complete framework and novel knowledge, based on statistical analysis, feature selection processes and classification models, so as to provide robust, rigorous and meaningful statistical inferences, alongside efficient feature subsets that can identify the stages of the progression. In addition, a new and improved method for more accurately summarising the calibres of the retinal vessel trunks is presented.
The first original contribution of this thesis is that a series of haemodynamic features (blood flow rate, blood flow velocity, etc.), estimated from the retinal vascular geometry under certain boundary conditions, is applied to studying the progression from diabetes to DR. These features are found to contribute substantially to the inferences and the understanding of the progression, yielding significant results, mainly for the venular network. The second major contribution is the proposed framework and experimental design for more accurately and efficiently studying and quantifying the vascular alterations that occur during the progression to DR and that can be safely attributed only to this progression. The combination of the framework and the experimental design leads to more sound and concrete inferences, providing a set of features, such as the central retinal artery and vein equivalents, fractal dimension, blood flow rate, etc., that are indeed biomarkers of progression to DR. The third major contribution of this work is the new and improved method for more accurately summarising the calibre of an arterial or venular trunk, with a direct application to estimating the central retinal artery equivalent (CRAE), the central retinal vein equivalent (CRVE) and their quotient, the arteriovenous ratio (AVR). Finally, the improved method is shown to make a notable difference in the estimations, when compared to the established alternative method in the literature, with an improvement of between 0.24% and 0.49% in the mean absolute percentage error and 0.013 in the area under the curve.
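For context, the established summary procedure against which such an improved method is compared works roughly as follows: measured vessel widths are combined pairwise, widest with narrowest, using the revised Knudtson formulas (scaling constants of 0.88 for arteries and 0.95 for veins), until a single summary calibre remains; the AVR is then CRAE/CRVE. The sketch below is a generic illustration of that established procedure under those assumed constants, not the thesis's improved method.

```python
def summarise_calibres(widths, k):
    """Iteratively pair the narrowest vessel with the widest using
    W = k * sqrt(w1^2 + w2^2) until one summary calibre remains
    (revised Knudtson pairing procedure)."""
    ws = sorted(widths)
    while len(ws) > 1:
        w1 = ws.pop(0)    # narrowest remaining
        w2 = ws.pop(-1)   # widest remaining
        ws.append(k * (w1 ** 2 + w2 ** 2) ** 0.5)
        ws.sort()
    return ws[0]

def avr(artery_widths, vein_widths):
    """Arteriovenous ratio from arterial and venular trunk widths."""
    crae = summarise_calibres(artery_widths, 0.88)  # artery constant
    crve = summarise_calibres(vein_widths, 0.95)    # vein constant
    return crae / crve
```

In practice the six largest arterioles and venules around the optic disc are fed into each call; a summarisation method that estimates the individual widths more accurately propagates directly into better CRAE, CRVE and AVR values.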
I have demonstrated that thoroughly planned experimental studies based on a comprehensive framework, which combines image processing algorithms, statistical and classification models, feature selection processes, and robust haemodynamic and geometric features extracted from the retinal vasculature (as a whole and from specific areas of interest), together provide succinct evidence that early detection of the progression from diabetes to DR can indeed be achieved. The performance achieved by the eight different classification combinations, in terms of the area under the curve, varied from 0.745 to 0.968.
56

An interoperability framework for security policy languages

Aryanpour, Amir January 2015 (has links)
Security policies are widely used across the IT industry to secure environments. Firewalls, routers, enterprise applications and even operating systems like Windows and Unix all use security policies to some extent to secure certain components. To automate the enforcement of security policies, security policy languages have been introduced. Security policy languages, like many other kinds of computer software, have been revolutionised during the last decade. A number of security policy languages have been introduced in industry to tackle specific business requirements, and each of these languages has itself evolved and been enhanced over the last few years. Nevertheless, a brief survey of security policy languages shows that the industry lacks an interoperability framework for them. Such a framework would facilitate the management of security policies from an abstract viewpoint. To achieve that goal, the framework utilises an abstract security policy language that is independent of existing security policy languages yet capable of expressing policies written in those languages. Use of an interoperability framework for security policy languages, as described above, brings major benefits that fall into two categories: short-term and long-term. In the short term, industry, and in particular multi-dimensional organisations that maintain multiple domains for different purposes, would lower their security-related costs by centrally managing security policies that are stretched across their environment and often managed locally. In the long term, the use of an abstract security policy language that is independent of any existing security policy language gradually paves the way for standardising security policy languages, a goal that seems unreachable at this moment in time.
Taking the above into account, the aim of this research is to introduce and develop a novel interoperability framework for security policy languages. Such a framework would allow multi-dimensional organisations to use an abstract policy language to orchestrate all security policies from a single point, which could then be propagated across their environment. In addition, it would allow security administrators to learn and use a single, common abstract language to describe and model their environment(s).
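The framework's central idea, one abstract rule rendered into several concrete policy syntaxes, can be sketched as follows. The rule fields and both target renderings are deliberately simplified illustrations covering only a trivial allow/deny-by-port case (iptables and Cisco-style ACL output), not the thesis's abstract language or its translation machinery.

```python
from dataclasses import dataclass

@dataclass
class AbstractRule:
    """A toy abstract policy rule, independent of any concrete language."""
    action: str      # "allow" or "deny"
    protocol: str    # e.g. "tcp", "udp"
    port: int

def to_iptables(rule: AbstractRule) -> str:
    """Render the abstract rule as a Linux iptables command."""
    target = "ACCEPT" if rule.action == "allow" else "DROP"
    return f"iptables -A INPUT -p {rule.protocol} --dport {rule.port} -j {target}"

def to_cisco_acl(rule: AbstractRule) -> str:
    """Render the same abstract rule in Cisco access-list syntax."""
    verb = "permit" if rule.action == "allow" else "deny"
    return f"access-list 100 {verb} {rule.protocol} any any eq {rule.port}"
```

The benefit described above follows directly: a policy authored once as `AbstractRule` objects can be propagated to every domain in the environment by adding one renderer per concrete language, instead of maintaining each policy set locally.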
