41 |
Automatic skeletonization and skin attachment for realistic character animation
Xie, Xin January 2009 (has links)
The realism of character animation is associated with a number of tasks ranging from modelling, skin deformation and motion generation to rendering. In this research we are concerned with two of them: skeletonization and weight assignment for skin deformation. The former is to generate a skeleton, which is placed within the character model and links the motion data to the skin shape of the character. The latter assists the modelling of a realistic skin shape when a character is in motion. In current animation production practice, the task of skeletonization is primarily undertaken by hand, i.e. the animator produces an appropriate skeleton and binds it with the skin model of a character. This is inevitably time-consuming and labour-intensive. To address this issue, in this thesis we present an automatic skeletonization framework. It aims at producing high-quality animatable skeletons without heavy human involvement, while allowing the animator to maintain overall control of the process. In the literature, the term skeletonization can have different meanings. Most existing research on skeletonization is in the remit of CAD (Computer Aided Design). Although this research is of significant reference value to animation, its downside is that the generated skeleton is either not appropriate for the particular needs of animation, or the methods are computationally expensive. Although some purpose-built animation skeleton generation techniques exist, they unfortunately rely on complicated post-processing procedures, such as thinning and pruning, which again can be undesirable. The proposed skeletonization framework makes use of a new geometric entity known as the 3D silhouette, which is an ordinary silhouette with its depth information recorded. We extract a curve skeleton from two 3D silhouettes of a character detected from its two perpendicular projections. The skeletal joints are identified by downsampling the curve skeleton, leading to the generation of the final animation skeleton. Efficiency and quality are the major performance indicators in animation skeleton generation. Our framework achieves the former by providing a 2D solution to the 3D skeletonization problem; the reduction in dimensionality brings much faster performance. Experiments and comparisons are carried out to demonstrate the computational simplicity. Its accuracy is also verified via these experiments and comparisons. To link a skeleton to the skin, we accordingly present a skin attachment framework aiming at automatic and reasonable weight distribution. It differs from conventional algorithms in taking topological information into account during weight computation. An effective range is defined for each joint; skin vertices located outside the effective range will not be affected by that joint. By this means, we provide a solution that removes the influence of a topologically distant, and hence most likely irrelevant, joint on a vertex. A user-defined parameter is also provided in this algorithm, which allows different deformation effects to be obtained according to the user's needs. Experiments and comparisons show that the presented framework produces weight distributions of good quality, freeing animators from tedious manual weight editing. Furthermore, it is flexible enough to be used with various deformation algorithms.
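The abstract does not give the weight formula, so the following is only a minimal sketch of the effective-range idea, assuming an inverse-distance falloff that is simply cut off outside each joint's range (the Euclidean cutoff stands in for the thesis's topological test; all names are hypothetical):

```python
import numpy as np

def skin_weights(vertices, joints, effective_range):
    """Assign per-joint weights to skin vertices.

    vertices: (V, 3) array of skin vertex positions.
    joints:   (J, 3) array of joint positions.
    effective_range: (J,) array; a joint influences only vertices
                     closer than its range (a hypothetical Euclidean
                     stand-in for the thesis's topological criterion).
    """
    V, J = len(vertices), len(joints)
    weights = np.zeros((V, J))
    for j in range(J):
        d = np.linalg.norm(vertices - joints[j], axis=1)
        inside = d < effective_range[j]
        # Inverse-distance falloff, applied inside the effective range only.
        weights[inside, j] = 1.0 / (d[inside] + 1e-8)
    # Normalise so each vertex's weights sum to 1 (skip unbound vertices).
    totals = weights.sum(axis=1, keepdims=True)
    np.divide(weights, totals, out=weights, where=totals > 0)
    return weights
```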
|
42 |
An evaluation of software modelling in practice
Phalp, Keith T. January 1995 (has links)
No description available.
|
43 |
Enabling collaborative modelling for a multi-site model-driven software development approach for electronic control units
Grimm, Frank January 2012 (has links)
An important aspect of support for distributed work is to enable users at different sites, even in different countries, to work collaboratively on the same artefacts. In the case of software system design, design models need to be accessible to more than one modeller at a time, allowing them to work independently of each other in what can be called a collaborative modelling process supporting parallel evolution. In addition, as such design is a largely creative process, users are free to create layouts which better depict their understanding of certain model elements presented in a diagram. That is, the layout of the model carries meaning which exceeds the simple structural or topological connections. However, tools for merging such models tend to do so from a purely structural perspective, thus losing an important aspect of the meaning the modeller intended to convey. This thesis presents a novel approach to model merging which preserves such layout meaning. It first presents evidence from an industrial study which demonstrates how modellers use layout to convey meaning. An important finding of the study is that diagram layout conveys domain-specific meaning and is important for modellers. This thesis therefore demonstrates the importance of diagram layout in model-based software engineering. It then introduces an approach to merging which allows for the preservation of domain-specific meaning in diagrams of models, and finally describes a prototype tool and the core aspects of its implementation.
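As an illustration of layout-preserving merging, here is a toy three-way merge, assuming model elements are flat attribute maps and that layout attributes should follow whichever modeller repositioned the element (the data model and merge rules are hypothetical, not the thesis's algorithm):

```python
def merge_models(base, left, right):
    """Three-way merge of models stored as {element_id: {attr: value}}.

    Structural attributes are merged conventionally; layout attributes
    ('x', 'y', ...) are taken from whichever side moved the element,
    so the meaning the modeller encoded in the layout is preserved.
    """
    LAYOUT = {"x", "y", "width", "height"}
    merged = {}
    for eid in set(base) | set(left) | set(right):
        b, l, r = base.get(eid, {}), left.get(eid, {}), right.get(eid, {})
        elem = {}
        for attr in set(b) | set(l) | set(r):
            bv, lv, rv = b.get(attr), l.get(attr), r.get(attr)
            if attr in LAYOUT:
                # Prefer the side that changed the layout relative to base.
                elem[attr] = lv if lv != bv else (rv if rv != bv else bv)
            else:
                # Naive structural rule: take whichever side changed it.
                elem[attr] = lv if lv != bv else rv
        merged[eid] = elem
    return merged
```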
|
44 |
Internet protocol-based information systems : an investigation into integration issues and iterative organisational change strategies
Jeoffroy, Matthew January 2001 (has links)
Internet-based electronic commerce is a rapidly evolving phenomenon. Organisations have reacted to the opportunities presented through electronic commerce as a new class of strategic information system that can be defined as an Internet Protocol Based Information System (IPBIS). As the demand for IPBIS grows, organisations are looking for ways to use it to leverage strategic advantage within their given markets. However, IPBIS are not yet established, and there are many unknowns surrounding their use and the change effects they may have on adopting organisations. Research is emerging that answers some of the organisational and electronic market issues being posed by organisations, but which are not being addressed by the increasing amount of non-academic hyperbole in evidence. This study was conducted using a mixed mode of case study research within a grounded theory framework to explore the role of IPBIS as a contributing factor to organisational change. Twelve cases were studied using semi-structured interviews and observation, to assess technology implementation strategies, change effects, and management of change strategies. The study revealed that organisations follow a staged model of integration that may start as a tentative venture with simple email facilities, and then moves through a set of discrete stages to potential full integration with internal information systems, which may be outsourced to third-party solution providers. The evidence supports a substantive theory of 'Push-Pull Decision Taking', developed to provide an explanatory framework showing that organisations reach a stage of risk analysis and information elicitation, and then feel compelled to participate in IPBIS electronic commerce initiatives which are not always in the immediate interests of the organisation. As a result of this decision taking, the organisation and its actors try to develop appropriate management strategies, which typically support incremental change. The resulting model of change and a series of working propositions provide a basis for practitioner work and further academic research in this domain.
|
45 |
Analysis of images under partial occlusion
Ramakrishnan, Sowmya January 2002 (has links)
In order to recognise objects from images of scenes that typically involve overlapping and partial occlusion, traditional computer vision systems have relied on domain knowledge to achieve acceptable performance. However, there is much useful structural information about the scene, for example the resolution of figure-ground ambiguity, which can be recovered, or at least plausibly postulated, in advance of applying domain knowledge. This thesis proposes a generic information-theoretic approach to the recognition and attribution of such structure within an image. It reinterprets the grouping process as a model selection process with MDL (minimum description length) as its information criterion. Building on the Gestalt notion of whole-part relations, a computational theory for grouping is proposed, with the central idea that the description length of a suitably structured whole entity is shorter than that of its individual parts. The theory is applied in particular to form structural interpretations of images under partial occlusion, prior to the application of domain knowledge. An MDL approach is used to show that increasingly economical structural models (groups) are selected to describe the image data as lower-level primitives are combined to form higher-level structures. From initially fitted segments, progressive groups are formed, leading to closed structures that are eventually classified as foreground or background. The observed results conform well with human interpretations of the same scenes.
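A minimal sketch of the MDL selection rule described above, assuming description length is approximated as a fixed per-model parameter cost plus a residual-coding cost (placeholder costs, not the thesis's actual coding scheme):

```python
def description_length(segment_group):
    """Toy DL: parameter cost of one composite model plus residuals."""
    MODEL_COST = 32.0                      # bits to encode one model
    residual = sum(s["fit_error"] for s in segment_group)
    return MODEL_COST + residual

def should_group(parts):
    """MDL selection: merge parts into a whole iff the whole's
    description length is shorter than the sum over the parts."""
    dl_whole = description_length(parts)
    dl_parts = sum(description_length([p]) for p in parts)
    return dl_whole < dl_parts

# Example: two collinear segments are cheaper described as one line.
a = {"fit_error": 3.0}
b = {"fit_error": 4.0}
print(should_group([a, b]))   # True: 39.0 bits < 71.0 bits
```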
|
46 |
An approach for designing a real-time intelligent distributed surveillance system
Valera Espina, Maria January 2006 (has links)
The main aim of this PhD is to investigate how a methodology rooted in systems engineering concepts can be established and applied to the design of distributed wide-area visual surveillance systems. Nowadays, the research community in surveillance systems tends to focus mostly on the computer vision part of these systems, researching and developing more intelligent algorithms. The integration, and finally the creation of the system per se, is usually regarded as a secondary priority. We postulate here that until a robust systems-centred, rather than algorithm-centred, approach is used, the realisation of realistic distributed surveillance systems is unlikely to happen. The future generation of surveillance systems can be categorised, from a systems engineering point of view, as concurrent, distributed, embedded, real-time systems. An important aspect of these systems is the inherent temporal diversity (heterogeneous timing) that arises from a variety of timing requirements and from the parallelisation and distribution of the processes that compose the system. Embedded, real-time systems are often naturally asynchronous. However, the computer vision part of these surveillance systems is commonly conceived and designed in a sequential and synchronous manner, in many cases using an object-oriented approach. Moreover, to cope with the distributed nature of these systems, technologies such as CORBA are applied. Designing processes in a synchronous manner, plus the run-time overheads associated with object-oriented implementations, may cause communication bottlenecks. Perhaps more importantly, it may produce unpredictable behaviour in some components of the system and hence undetermined performance of the system as a whole. Clearly, this is a major problem for surveillance systems, which can often be expected to be safety-critical. This research has explored the use of an alternative approach to object orientation for the design and implementation of intelligent distributed surveillance systems. The approach is known as Real-Time Networks (exemplified by systems engineering methodologies such as MASCOT and extensions such as DORIS). This approach is based on conceiving solutions as naturally concurrent, from the highest level of abstraction, with concurrent activities communicating through well-defined data-centred mechanisms. The methodology favours a disciplined approach to design, which yields a modular structure with close correspondence between functional elements in design and constructional elements for system integration. It is such characteristics that we believe will become essential in overcoming the complexities of going from small-scale computer vision prototypes to large-scale working systems. To justify the selection of this methodology, an overview of the different software design approaches that may be used for designing wide-area intelligent surveillance systems is given. This is then narrowed down to a comparison between Real-Time Networks and Object Orientation. The comparison is followed by an illustration of two different design solutions for an existing real-time distributed surveillance system called ADVISOR. One design solution, based on Object Oriented concepts, uses CORBA as a means of achieving the integration and distribution characteristics of the system. The other, based on Real-Time Networks, uses the DORIS methodology for the design of the system. Once the selection has been justified, a novel design of a generic visual surveillance system using the proposed Real-Time Networks method is presented. Finally, the conclusions and future work are given in the last chapter.
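The data-centred mechanisms of Real-Time Networks can be illustrated with MASCOT's two classic intercommunication areas: channels (queued, destructive read) and pools (overwritten, non-destructive read). A minimal threaded sketch, greatly simplified relative to a real MASCOT/DORIS design:

```python
import queue
import threading

class Channel:
    """Queued communication: each item is consumed exactly once."""
    def __init__(self):
        self._q = queue.Queue()
    def write(self, item):
        self._q.put(item)
    def read(self):                 # blocks until data arrives
        return self._q.get()

class Pool:
    """Latest-value store: writes overwrite, reads do not consume."""
    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._value = initial
    def write(self, value):
        with self._lock:
            self._value = value
    def read(self):
        with self._lock:
            return self._value

# A tracker activity might push detections down a Channel while
# publishing its latest status into a Pool read by a monitor activity.
```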
|
47 |
The optimisation of peer-to-peer overlays for mobile ad-hoc networks
Millar, Grant P. January 2013 (has links)
The proliferation of smart devices and wireless interfaces has enabled the field of Mobile Ad-hoc Networks (MANETs) to flourish as the technology becomes realistically deployable. MANETs are created by a group of autonomous nodes that communicate with each other by establishing a multihop radio network, and maintain connectivity in an infrastructureless and decentralised manner. It thus becomes more important to examine how applications such as those used on the Internet can be deployed on top of such MANETs. Peer-to-peer (P2P) networks can be defined as decentralised application-layer overlay networks where traffic flows on top of a physical network such as the Internet. Such networks are formed dynamically on-the-fly and rely on no fixed infrastructure such as servers. On the Internet a great number of applications have exploited the properties of P2P networks; they have been used to provide services such as distributed data storage systems, distributed instant messaging systems, publish/subscribe systems, distributed name services, Voice over IP (VoIP) services and many more. This thesis proposes three novel P2P protocols. Reliable Overlay Based Utilisation of Services and Topology (ROBUST) minimises end-to-end lookup delay while increasing lookup success rate compared with the current state of the art, and is usable on any MANET routing protocol. It achieves this by using a hierarchical clustered topology, where peers are clustered with other peers in close proximity in order to reduce P2P routing hops and create a more efficient network. Proactive MANET DHT (PMDHT) combines proactive MANET routing and DHT functionality in order to minimise end-to-end lookup delay while increasing lookup success rate compared with the current state of the art. This is achieved by heavily integrating the P2P functionality at the network layer, piggy-backing P2P messages onto the routing messages of the proactive MANET routing protocol Optimized Link State Routing version 2 (OLSRv2). Using this method the P2P overlay topology exactly matches that of the underlying network, while all peers are fully aware of the state of that topology, meaning P2P lookups can be completed in one logical step. Reactive MANET P2P Overlay (RMP2P) combines reactive MANET routing and DHT functionality in order to minimise end-to-end lookup delay while increasing lookup success rate compared with the current state of the art. In RMP2P we combine P2P lookup functionality with the MANET routing protocol Ad-hoc On-demand Distance Vector version 2 (AODVv2); in this case we piggy-back P2P lookups onto the route request messages where possible, decreasing overhead and latency in the network. We evaluate the performance of the proposed architectures using a custom-made packet-level simulator developed with ns-2 (network simulator 2); the results show that these architectures outperform the current state-of-the-art P2P overlays in specific scenarios. The ROBUST protocol is suited to scenarios where the underlying routing protocol cannot be modified. The PMDHT protocol performs best overall in networks which require more scalability. The RMP2P protocol performs best in networks with high mobility. We end this thesis with our conclusions and avenues for future work in the field of P2P networks for MANETs.
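All three protocols build on the same DHT primitive: hashing keys onto an identifier ring and resolving each key to the responsible peer. A toy, centralised illustration of that primitive only (none of the MANET routing or piggy-backing described above):

```python
import hashlib
from bisect import bisect_left

def node_id(name, bits=32):
    """Hash a peer name or key onto a 2**bits identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** bits)

class ToyDHT:
    def __init__(self, peers):
        self.ring = sorted(node_id(p) for p in peers)
        self.store = {nid: {} for nid in self.ring}

    def _owner(self, key_id):
        # First peer clockwise from the key owns it (wraps around).
        i = bisect_left(self.ring, key_id)
        return self.ring[i % len(self.ring)]

    def put(self, key, value):
        self.store[self._owner(node_id(key))][key] = value

    def get(self, key):
        return self.store[self._owner(node_id(key))].get(key)

dht = ToyDHT(["peer-a", "peer-b", "peer-c"])
dht.put("service/printer", "10.0.0.7")
print(dht.get("service/printer"))   # -> 10.0.0.7
```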
|
48 |
Energy and spectrum efficient future wireless networks
Adigun, Olayinka January 2014 (has links)
Future Wireless Networks (FWN) will be heterogeneous and dynamic networks comprising different wireless technologies such as cellular technologies (LTE and LTE-A), Wireless Local Area Networks (WLAN), WiMAX and Wireless Sensor Networks (WSN). They are expected to provide high data rates in excess of 1 Gbit/s in a variety of scenarios involving mobile users. A number of technologies, such as Multiple Input Multiple Output (MIMO) antennas, Cognitive Radio (CR), Orthogonal Frequency Division Multiple Access (OFDMA), Dynamic Spectrum Access (DSA), Cooperative Communication, white space and 60 GHz transmission, have been identified as enablers of FWN. However, two critical challenges still facing the realization of the targets of FWNs are enormous energy consumption and the limited spectrum bands useful for wireless communications. This thesis focuses on two enabling technologies in future wireless networks: MIMO antennas and Cognitive Radio technology. These two technologies have been chosen as they have the capability to tackle both the energy optimization and spectrum scarcity challenges in FWN. The thesis investigates energy and spectrum efficiency in MIMO antenna technology, using Long Term Evolution (LTE), which is positioned to be a strong player amongst cellular technologies in FWN, as a case study. The work presents adapted energy efficiency metrics which serve as a basis for comparison, and shows various relationships between the number of transmit and receive antennas, the feedback information, and the energy and spectral efficiency of various MIMO schemes in LTE. The thesis also investigates energy and spectrum efficiency in Cognitive Radio technology. In dealing with energy efficiency in a cognitive radio environment, the options for making CR operations more energy efficient and an analytical evaluation of the energy consumed at different stages of secondary spectrum usage have been explored. In dealing with spectrum efficiency in a cognitive radio environment, this work has investigated and proposed a spectrum decision and allocation scheme whose performance evaluation confirms its ability to offer better utilisation of spectrum holes and better spectral efficiency.
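The thesis's adapted metrics are not reproduced in the abstract; the sketch below shows the generic bit-per-Joule formulation such metrics build on, using a Shannon-capacity estimate of MIMO spectral efficiency (illustrative assumptions throughout):

```python
import numpy as np

def mimo_spectral_efficiency(H, snr):
    """Shannon capacity in bit/s/Hz of an Nr x Nt MIMO channel with
    equal power per antenna: C = log2 det(I + (snr / Nt) * H H^H)."""
    nr, nt = H.shape
    gram = H @ H.conj().T
    return np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * gram).real)

def energy_efficiency(H, snr, bandwidth_hz, total_power_w):
    """Generic bit-per-Joule metric: delivered rate over consumed power."""
    rate_bps = mimo_spectral_efficiency(H, snr) * bandwidth_hz
    return rate_bps / total_power_w

# 2x2 Rayleigh-faded channel at 10 dB SNR (linear value 10).
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
print(energy_efficiency(H, snr=10.0, bandwidth_hz=20e6, total_power_w=5.0))
```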
|
49 |
Cross-layer design for scalable/3D wireless video transmission over IP networks
Appuhami Ralalage, Harsha Nishantha Deepal January 2014 (has links)
The first two parts of the thesis address issues related to 3D video transmission over wireless networks and propose cross-layer design techniques to optimise the information exchange between different Open Systems Interconnection (OSI) layers or system blocks. In particular, the first part exploits the flexibility of adjusting the checksum coverage length of the transport layer protocol UDP-Lite, as opposed to its counterpart UDP. The study proposes an optimum checksum coverage length to protect only the important header information of an H.264 encoded video transmission over wireless links, together with Robust Header Compression (RoHC) and Automatic Repeat Request (ARQ). The second part investigates a content- and channel-aware Medium Access Control (MAC) layer scheduling algorithm that considers the layer priorities of an H.264 Scalable Video Coding (SVC) encoded 3D video transmission over an Orthogonal Frequency Division Multiple Access (OFDMA) based wireless link, with a prioritised queuing technique to improve the Quality of Experience (QoE) of end users. A considerable amount of research time was devoted to investigating accurate, consistent and real-time quality evaluation techniques for 3D images and video, as cross-layer design techniques mostly rely on quality feedback from end users to optimise system parameters. The first quality metric proposed is a stereoscopic image quality metric using the disparity histogram of the left and right views. A 3D stereoscopic video quality evaluation technique is then proposed, based on the predominant energy distribution of gradients using 3D structural tensors. Finally, a near no-reference quality metric is proposed for colour-plus-depth 3D video compression and transmission, using the extracted edge information of colour images and depth maps. The research also investigates a number of error-resilient transmission methods to combat artefacts in 3D video delivery over wireless channels. A Region-of-Interest (ROI) based transmission method for stereoscopic videos is proposed to mark the important areas of the video and provide Unequal Error Protection (UEP) during transmission. Next, we investigate the effects of compression and packet loss on the rendered video quality and propose a model to quantify rendering and concealment errors at the sender side, then use the information generated through the model to effectively deliver 3D video. Finally, an asymmetric coding approach is suggested for 3D medical video transmitted over band-limited wireless networks, in view of the large data rates associated with 3D medical video, which is usually captured at high resolution and pixel depth. Key words: 3D video transmission, Cross-layer design, Orthogonal frequency-division multiple access, H.264 video compression, Scalable video coding, Robust header compression, Automatic repeat request, Quality of experience, Prioritised 3D video transmission, Unequal error protection.
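The checksum-coverage mechanism can be shown concretely with Linux's UDP-Lite socket options. A minimal sketch, assuming a Linux host; the constants are the Linux values, and the 12-byte payload coverage is an illustrative figure, not the optimum derived in the thesis:

```python
import socket

# Linux constants (not always exported by Python's socket module):
# protocol 136 is UDP-Lite; the options set how many bytes, counted
# from the start of the UDP-Lite header, are covered by the checksum.
IPPROTO_UDPLITE = 136
UDPLITE_SEND_CSCOV = 10
UDPLITE_RECV_CSCOV = 11

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, IPPROTO_UDPLITE)

# Checksum the 8-byte UDP-Lite header plus the first 12 payload bytes
# (e.g. RTP/NAL header fields); bit errors beyond that are delivered
# to the codec rather than causing the whole packet to be dropped.
COVERAGE = 8 + 12
sock.setsockopt(IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV, COVERAGE)
sock.setsockopt(IPPROTO_UDPLITE, UDPLITE_RECV_CSCOV, COVERAGE)

sock.sendto(b"\x80" * 200, ("127.0.0.1", 5004))
```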
|
50 |
A QoS framework for modeling and simulation of distributed services within a cloud environment
Oppong, Eric Asamoah January 2014 (has links)
Distributed computing paradigms such as Cloud and SOA provide the architecture and medium for service computing, giving organisations flexibility in implementing IT solutions that meet specific business objectives. The advancement of Internet technology opens up the use of service computing and broadens its scope into areas classified as utility computing: computing solutions modelled as services, allowing consumers to use and pay for solutions that include applications and physical devices. The service computing model offers great opportunities in cutting cost and deployment effort, but also presents a case of changing user demands that differs from the usual service level agreement in computing deployment. Service providers must consider different aspects of consumer demands in the provisioning of services, including non-functional requirements such as Quality of Service; this relates not only to users' expectations but also to managing the effective distribution of resources and applications. The normal model for meeting user requirements is over-stretched, and therefore requires more information gathering and analysis of requirements that can be used to determine effective management in service computing by leveraging SOA and Cloud computing based on QoS factors. A model is needed that considers multiple criteria in decision making to enable proper mapping of resources from the service composition level to resource provision for processing user requests: a framework capable of analysing service composition and resource requirements. Thus, the aim of the thesis is to develop a framework for enabling service allocation in Cloud Computing, based on SOA and QoS, for analysing user requirements to ensure effective allocation and performance in a distributed system. The framework is designed to handle the top layer of user requirements in terms of application development and the lower layer of resource management, analysing the requirements in terms of QoS in order to identify the common factors that match the user requirements and the available resources. The framework is evaluated using the CloudSim simulator to test its effectiveness in improving service and resource allocation in a distributed computing environment. This approach offers greater flexibility in overcoming issues of over-provisioning and under-provisioning of resources by maintaining effective provisioning using the Service Oriented QoS Enabled Framework (SOQ-Framework) for requirement analysis of service composition and resource capabilities.
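The abstract describes the QoS analysis only at a high level; a toy sketch of weighted multi-criteria matching of a request to candidate resources follows (factor names, weights and the scoring rule are all hypothetical):

```python
def qos_score(resource, request, weights):
    """Weighted multi-criteria score of a resource against a request.
    Higher is better; a resource missing any hard requirement scores 0."""
    score = 0.0
    for factor, weight in weights.items():
        offered, required = resource[factor], request[factor]
        if offered < required:
            return 0.0          # cannot satisfy this QoS factor at all
        # Mild reward for headroom, capped to discourage over-provisioning.
        score += weight * min(offered / required, 2.0)
    return score

request = {"cpu_mips": 1000, "bandwidth_mbps": 50, "availability": 0.99}
weights = {"cpu_mips": 0.5, "bandwidth_mbps": 0.3, "availability": 0.2}
hosts = [
    {"name": "vm-1", "cpu_mips": 1200, "bandwidth_mbps": 100, "availability": 0.999},
    {"name": "vm-2", "cpu_mips": 800,  "bandwidth_mbps": 200, "availability": 0.999},
]
best = max(hosts, key=lambda h: qos_score(h, request, weights))
print(best["name"])   # vm-1: vm-2 fails the CPU requirement outright
```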
|