421. A dynamic programming model to solve optimisation problems using GPUs

O'Connell, Jonathan F. January 2017
This thesis presents a parallel, dynamic programming based model which is deployed on the GPU of a system to accelerate the solving of optimisation problems. This is achieved by running GPU-based computations and memory transactions simultaneously, so that computation never pauses, and by overcoming the memory constraints of solving large problem instances. As a result, some optimisation problems which, due to their complexity, are currently not solved in an exact manner for real-world-sized instances are moved into the solvable realm. The model is implemented to solve a range of different test problems, where artificially constructed test data is used to ensure good performance even in the worst cases. Through this extensive testing, we can be confident the model will perform well when used to solve real-world test cases. Testing of the model was carried out using a range of different implementation parameters relating to deployment on the GPU, in order to identify both optimal implementation parameters and how the model will operate when running on different systems. All problems, when implemented in parallel using the model, show run-time improvements compared to the sequential implementations, in some instances up to hundreds of times faster, but more importantly also show high efficiency metrics for the utilisation of GPU resources. Throughout testing, emphasis has been placed on GPU-based metrics to ensure the wider generic applicability of the model. Finally, the parallel model allows new problems to be defined through a simple file format, enabling wider usage of the model.
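As a rough illustration (not drawn from the thesis itself, which does not name its test problems here), the sketch below shows the classic 0/1 knapsack recurrence: every cell of a stage's row depends only on the previous row, the kind of per-row independence a GPU kernel can compute in parallel.

```python
# Illustrative only: a stand-in dynamic programming optimisation problem.
# Each capacity entry of stage i depends only on stage i-1, so a whole row
# could be computed in parallel on a GPU.
def knapsack(values, weights, capacity):
    prev = [0] * (capacity + 1)          # DP row for stage i-1
    for v, w in zip(values, weights):
        curr = prev[:]                   # stage i: cells are independent
        for c in range(w, capacity + 1):
            curr[c] = max(prev[c], prev[c - w] + v)
        prev = curr
    return prev[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220
```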
422. Spatiotemporal user and place modelling on the geo-social web

Mohamed, Soha January 2017
Users of Location-Based Social Networks (LBSNs) are giving away information about their whereabouts and their interactions in geographic space. In comparison to other types of personal data, location data are sensitive and can reveal a user’s daily routines, activities, experiences and interests in the physical world. As a result, the user faces an information overload that makes it difficult to reach a satisfactory decision on where to go or what to do in a place. Thus, finding matching places, users and content is one of the key challenges in LBSNs. This thesis investigates the different dimensions of data collected on LBSNs and proposes a user and place modelling framework. In particular, this thesis proposes a novel approach for the construction of different views of personal user profiles that reflect users’ interest in geographic places and how they interact with those places. Three novel modelling frameworks are proposed: the static user model, the dynamic user model and the semantic place model. The static user model is a basic model that is used to represent the overall user interactions towards places. The dynamic user model, on the other hand, captures the change of the user’s preferences over time. The semantic place model identifies user activities in places and models the relationships between places, users, implicit place types, and implicit activities. The proposed models demonstrate how geographic place characteristics, as well as implicit user interactions in the physical space, can further enrich the user profiles. The enrichment method proposed is a novel method that combines the semantic and the spatial influences into user profiles. Evaluation of the proposed methods is carried out using realistic data sets collected from the Foursquare LBSN. New location and content recommendation methods are designed and implemented to enhance existing location recommendation methods, and the results show the usefulness of considering place semantics and the time dimension, as captured in the proposed user profiles, when recommending locations and content. The thesis considers two further related problems; namely, the construction of dynamic place profiles and computing the similarity between users on the LBSN. Dynamic place profiles are representations of geographic places through users’ interaction with the places. In comparison to static place models represented in gazetteers and map databases, these place profiles provide a dynamic view of how the places are used by actual people visiting and interacting with places on the LBSN. The different views of personal user profiles constructed within our framework are used for computing the similarity between users on the LBSN. Temporal user similarities on both the semantic and spatial levels are proposed and evaluated. Results of this work show the challenges and potential of the user data collected on LBSNs.
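For illustration only (the check-in data and decay parameter are hypothetical, and the thesis's models are richer), a static profile can be read as raw counts of a user's interactions per place category, while a dynamic profile weights interactions by recency:

```python
# Sketch of static vs. dynamic user profiles from hypothetical check-ins.
from collections import Counter
from math import exp

def static_profile(checkins):
    """checkins: list of (place_category, days_ago) tuples; raw counts."""
    return Counter(cat for cat, _ in checkins)

def dynamic_profile(checkins, half_life_days=30.0):
    """Recency-weighted counts: a check-in half_life_days old counts half."""
    decay = lambda days: exp(-days * 0.6931 / half_life_days)  # ln 2 ~ 0.6931
    profile = Counter()
    for cat, days_ago in checkins:
        profile[cat] += decay(days_ago)
    return profile

checkins = [("coffee shop", 2), ("museum", 200), ("coffee shop", 5), ("park", 40)]
print(static_profile(checkins))   # overall interactions towards place types
print(dynamic_profile(checkins))  # preferences shifted towards recent visits
```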
423. Hybridisation of GNSS with other wireless/sensors technologies onboard smartphones to offer seamless outdoors-indoors positioning for LBS applications

Maghdid, Halgurd January 2015
Location-based services (LBS) are becoming an important feature on today’s smartphones (SPs) and tablets. Likewise, SPs include many wireless/sensor technologies, such as global navigation satellite system (GNSS), cellular, wireless fidelity (WiFi), Bluetooth (BT) and inertial sensors, which have increased the breadth and complexity of such services. One of the main demands of LBS users is an always-available, seamless positioning service. However, no single onboard SP technology can seamlessly provide location information from outdoors into indoors. In addition, the required location accuracy varies across LBS applications. This is mainly because each of these onboard wireless/sensor technologies has its own capabilities and limitations. For example, when outdoors, GNSS receivers on SPs can locate the user to within a few meters and supply accurate time to within a few nanoseconds (e.g. ± 6 nanoseconds); however, when SPs move indoors this capability is lost. In another vein, the other onboard wireless/sensor technologies can offer better SP positioning accuracy, but only given some pre-defined knowledge and pre-installed infrastructure. Therefore, to overcome such limitations, hybridising measurements from these wireless/sensor technologies into one positioning system is a possible solution to offer a seamless localisation service and to improve location accuracy. This thesis aims to investigate, design and implement solutions that offer seamless, accurate SP positioning at lower cost than current solutions. The thesis proposes three novel SP localisation schemes: a WAP synchronisation/localisation scheme, SILS and UNILS. The schemes are based on hybridising GNSS with WiFi, BT and inertial-sensor measurements using combined localisation techniques, including time-of-arrival (TOA) and dead-reckoning (DR). The first scheme synchronises and locates WAPs using the fixed location/time information of outdoor SPs, to help indoor localisation. SILS helps locate any SP seamlessly as it moves from outdoors to indoors, using measurements of GNSS, synchronised/located WAPs and BT-connectivity signals between groups of cooperating SPs in the vicinity. UNILS integrates onboard inertial-sensor readings into SILS to provide seamless SP positioning even deep indoors, i.e. when the signals of WAPs or BT anchors cannot be used. Results obtained from OPNET simulations, for various SP network sizes and indoor/outdoor combination scenarios, show that the schemes can provide seamless positioning and locate indoor SPs to within 1 meter in near-indoor conditions; 2 meters can be achieved when locating SPs indoors (using SILS), while an accuracy of around 3 meters can be achieved when locating SPs in various deep-indoor situations without any constraint (using UNILS). The thesis closes by identifying possible future work to implement the proposed schemes on SPs and to achieve more accurate indoor SP locations.
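The two kinds of measurement being fused can be sketched as follows; the numbers and step structure are illustrative assumptions, not the thesis's algorithms:

```python
# Illustrative only: a basic dead-reckoning position update from speed/heading
# estimates, and a time-of-arrival range from a time-synchronised transmitter.
from math import cos, sin, radians

C = 299_792_458.0  # speed of light, m/s

def dead_reckon(x, y, speed_mps, heading_deg, dt_s):
    """Advance a 2-D position estimate by one dead-reckoning step."""
    x += speed_mps * dt_s * cos(radians(heading_deg))
    y += speed_mps * dt_s * sin(radians(heading_deg))
    return x, y

def toa_range(t_transmit_s, t_receive_s):
    """Distance implied by one-way signal travel time (requires synchronised clocks)."""
    return (t_receive_s - t_transmit_s) * C

x, y = dead_reckon(0.0, 0.0, speed_mps=1.4, heading_deg=30.0, dt_s=1.0)
print(round(x, 2), round(y, 2))          # ~1.21, 0.7
print(round(toa_range(0.0, 50e-9), 2))   # ~14.99 m for a 50 ns travel time
```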
424. Error estimation for simplifications of electrostatic models

Rahimi, Amir January 2016
Based on a posteriori error estimation, a method to bound the error induced by simplifying the geometry of a model is presented. Error here refers to the solution of a partial differential equation and a specific quantity of interest derived from it. Geometry simplification specifically refers to replacing CAD model features with simpler shapes. The simplification error estimate helps to determine whether a feature can be removed from the model by indicating how much the simplification affects the physical properties of the model, as measured by a quantity of interest. The approach in general can also be extended to other problems governed by linear elliptic equations. Strict bounds are proven for errors expressed in the energy norm. The approach relies on the Constitutive Relation Error to enable practically useful and computationally affordable bounds for error measures in the energy norm. All methodologies are demonstrated for a second-order elliptic partial differential equation for electrostatic problems. Finite element simplification error estimation code is developed to calculate the simplification error numerically. Numerical experiments on several geometric models of capacitors show satisfactory results for the simplification error bounds across a range of different defeaturing cases and a quantity of interest that is linear in the solution of the electrostatic partial differential equation. Overall, the numerically calculated bounds are always valid, but are more or less accurate depending on the type of feature and its simplification. In particular, larger errors may be overestimated, while good estimates for small errors can be achieved. This makes the bound suitable overall for deciding whether simplifying a feature is acceptable.
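The setting can be summarised with the standard electrostatic formulation and energy norm (the CRE-based bound itself is not reproduced here, and the notation below is a generic statement rather than the thesis's):

```latex
% Electrostatic potential u on domain \Omega with permittivity \varepsilon,
% and the energy norm of the defeaturing error e = u - u_d between the
% original solution u and the solution u_d on the simplified geometry.
\[
  -\nabla \cdot \bigl(\varepsilon \, \nabla u\bigr) = \rho
  \quad \text{in } \Omega,
  \qquad
  \| e \|_E \;=\; \Bigl( \int_{\Omega} \varepsilon \,
      |\nabla e|^{2} \, \mathrm{d}\Omega \Bigr)^{1/2},
  \qquad e = u - u_d .
\]
```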
425. Metaheuristics for designing efficient routes & schedules for urban transportation networks

John, Matthew P. January 2016
This thesis tackles the Urban Transit Network Design Problem (UTNDP), which involves determining an efficient set of routes and schedules for public transit networks. The UTNDP can be divided into five subproblems, as identified by Ceder and Wilson [24]: i) network design, ii) frequency setting, iii) timetable development, iv) bus scheduling, and v) driver scheduling, with each problem requiring the output of the previous. In this thesis we focus on the first two stages, network design and frequency setting. We identify that evaluation is a major bottleneck for the network design problem and propose alternative approaches with the aim of decreasing the computation time. A multi-objective evolutionary algorithm (MOEA) for the network design problem is then presented that trades off the passenger and operator costs. A passenger wishes to travel from their origin to destination in the shortest possible time, whereas the network operator must provide an adequate level of service whilst balancing the operational costs, i.e. the number of drivers and vehicles. The proposed MOEA combines a heuristically seeded population, using a novel construction algorithm, with several genetic operators to produce improved results compared with the state of the art from the literature. We provide an evaluation of the effectiveness of the genetic operators, showing that improved performance, in terms of the number of dominating and nondominating solutions, is achieved as the size of the problem instance increases. Four surrogate models are proposed and an empirical evaluation is performed to assess the solution quality versus run-time trade-off in each case. It is found that surrogate models perform well on large problem instances, producing improved Pareto sets compared with the original algorithm due to the increased amount of evolution that can occur under fixed time limits. Finally, we empirically evaluate three multi-objective approaches for the frequency setting problem, utilising the route networks produced during our network design procedure. It is shown that a MOEA based on the NSGAII framework provides the best quality solutions, owing to the high cost of evaluation incurred by neighbourhood-based approaches such as multi-objective tabu search. Constraints on vehicle capacity and fleet size are then introduced. It is shown that such constraints vastly reduce the number of solutions from network design that can successfully undergo frequency setting. A discussion is then presented highlighting the limitations of conducting network design and frequency setting separately, along with alternative approaches that could be used in the future. We conclude this thesis by summarising our findings and presenting topics for future work.
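As a small illustration of the Pareto bookkeeping behind "dominating and nondominating solutions", the sketch below filters hypothetical (passenger cost, operator cost) pairs, both to be minimised; the values are made up for illustration:

```python
# Pareto-dominance check and nondominated filtering over two objectives.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Keep only solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

routesets = [(25.0, 180.0), (28.0, 150.0), (30.0, 200.0), (26.0, 175.0)]
print(nondominated(routesets))  # (30.0, 200.0) is dominated and dropped
```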
426. New weighting schemes for document ranking and ranked query suggestion

Plansangket, Suthira January 2017
Term weighting is a process of scoring and ranking a term’s relevance to a user’s information need or the importance of a term to a document. This thesis aims to investigate novel term weighting methods with applications in document representation for text classification, web document ranking, and ranked query suggestion. Firstly, this research proposes a new feature for document representation under the vector space model (VSM) framework, i.e., class-specific document frequency (CSDF), which leads to a new term weighting scheme based on term frequency (TF) and the newly proposed feature. The experimental results show that the proposed methods, CSDF and TF-CSDF, improve the performance of document classification in comparison with other widely used VSM document representations. Secondly, a new ranking method called GCrank is proposed for re-ranking web documents returned from search engines using document classification scores. The experimental results show that the GCrank method can improve the performance of ranking returned web documents in terms of several commonly used evaluation criteria. Finally, this research investigates several state-of-the-art ranked retrieval methods, then adapts and combines them, leading to a new method called Tfjac for ranked query suggestion, which is based on the combination of the TF-IDF and Jaccard coefficient methods. The experimental results show that Tfjac is the best method for query suggestion among the methods evaluated. It outperforms the most widely used TF-IDF method in terms of increasing the number of highly relevant query suggestions.
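The abstract says Tfjac combines TF-IDF with the Jaccard coefficient but not how; the sketch below uses textbook definitions of both and a simple product as an assumed combination, purely for illustration and not as the thesis's formula:

```python
# Textbook TF-IDF and Jaccard scores over token lists, combined by a product.
from math import log

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = log(len(docs) / df) if df else 0.0
    return tf * idf

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def tfjac(query, candidate, docs):
    tfidf_score = sum(tf_idf(t, candidate, docs) for t in query)
    return tfidf_score * jaccard(query, candidate)

docs = [["cheap", "flights", "london"], ["cheap", "hotels"], ["london", "weather"]]
print(round(tfjac(["cheap", "flights"], docs[0], docs), 3))  # ~0.334
```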
427. Dynamic detection and immunisation of malware using mobile agents

Al Sebea, Hussain January 2005
At present, malicious software (mal-ware) is causing many problems on private networks and the Internet. One major cause of this is outdated or absent security software, such as antivirus software and personal firewalls, to counter these threats. Another cause is that mal-ware can exploit weaknesses in software, notably operating systems. This can be reduced by use of a patch service, which automatically downloads patches to its clients; unfortunately, this can lead to new problems introduced by the patch server itself. The aim of this project is to produce a more flexible approach in which agent programs are dispatched to clients (which in turn run static agent programs), allowing them to communicate locally rather than over the network. Thus, this project uses mobile agents: software agents which can be given an itinerary and migrate to different hosts, interrogating the static agents therein for any suspicious files. These mobile agents are deployed with a list of known mal-ware signatures and their corresponding cures, which are used as a reference to determine whether a reported suspect is indeed malicious. The overall system is responsible for Dynamic Detection and Immunisation of Mal-ware using Mobile Agents (DIMA) on peer-to-peer (P2P) systems. DIMA can be categorised under Intrusion Detection Systems (IDS) and deals with the specific branch of malicious software discovery and removal. The static agent was implemented in Borland Delphi, due to its seamless integration with the Windows operating system, whereas the mobile agent was implemented in Java, running on the Grasshopper mobile agent environment, due to its compliance with several mobile agent development standards and its in-depth documentation. In order to evaluate the characteristics of the DIMA system a number of experiments were carried out. These included measuring the total migration time, and the effect of host hardware specification on trip timings. Also, as the mobile agent migrated, its size was measured between hops to see how it varied as more data was collected from hosts. The main results of this project show that the time the mobile agent took to visit all predetermined hosts increased linearly as the number of hosts grew (the average inter-hop interval was approximately 1 second). It was also noted that modifications to hardware specifications in a group of hosts had minimal effect on the total journey time for the mobile agent: increasing a group of hosts' processor speeds or RAM capacity made only a subtle difference to round-trip timings (less than 300 milliseconds faster than a slower group of hosts). Finally, it was shown that as the agent made more hops, it increased in size due to the accumulation of statistical data collected (57 bytes after the first hop, and then a constant increase of 4 bytes per hop thereafter).
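A minimal sketch of the signature lookup such an agent might carry; the digest and cure entries are hypothetical placeholders, not DIMA's actual signature list:

```python
# Hash-based signature check: suspect files reported by a static agent are
# hashed and compared against a known-signature -> cure table.
import hashlib

SIGNATURES = {
    # hypothetical entry: sha256 digest of known mal-ware -> cure to apply
    "hypothetical-sha256-digest": "delete infected file",
}

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_suspect(path):
    """Return (is_malicious, cure) for the file at `path`."""
    cure = SIGNATURES.get(sha256_of(path))
    return (True, cure) if cure else (False, None)
```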
428. Automated process of network documentation

Campbell, Bryan January 2007
Knowledge of network topologies is invaluable to system administrators regardless of the size of an enterprise. Yet this information is time consuming to collect, and even more so to process into easily consumable formats (i.e. visual maps). This is especially so when the culture within which administrators operate is more concerned with operational stability and continuity as deliverables than with documentation and analysis. The time-cost of documentation impinges upon its own production. This continues to be the case although documentation is of increasing importance to non-technical personnel in enterprises, and as a complement/supplement to network management systems. This thesis puts forth a framework to largely automate the process of documenting network topologies. The framework is based on issues raised in recent research concerning the needs of IT administrators and network discovery methods. An application is also described serving as a proof-of-concept for the central elements of the framework. This application was realised in the Microsoft Visual C# 2005 Express Edition programming environment using the C#.NET language. The compiled result is supported by the .NET Framework 2.0 runtime environment. The application allows an administrator to control, through a graphical interface, the sequence of discovering a network and outputting visual documentation. For testing, Cisco Systems routers and switches, along with a Microsoft Windows-based laptop, were used to construct a mock network. Measurements of the performance of the application were recorded against the mock network in order to compare it to other methods of network discovery. Central to the application's implementation is a recognition that networks are more likely than not to be heterogeneous; that is, they will be composed of equipment from more than one vendor. This assumption focused the choices about the framework design and concept implementation toward open standard technologies. Namely, SNMP was selected for discovery and data gathering, XML is utilised for data storage, and data processing and document production are handled by XSL. Built around these technologies, the application successfully executed its design. It was able to query network devices and receive information from them about their configuration. It next stored that information in an XML document. Lastly, with no change to the source data, HTML and PDF documents were produced demonstrating details of the network. The work of this thesis finds that the open standard tools employed are both appropriate for, and capable of, automatically producing network documentation. Compared to some alternative tools, they are shown to be more capable in terms of speed, and more appropriate for learning about multiple layers of a network. The solution is also judged to be widely applicable to networks, and highly adaptable in the face of changing network environments. The choices of tools for the implementation were all largely foreign to the author. Apart from the prima facie achievements, programming skills were significantly stretched, understanding of SNMP architecture was improved, and the basics of the XML languages XSLT, XPath, and XSL-FO were gained.
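A small sketch of the storage step, using Python's standard library rather than the thesis's C#/.NET implementation; the device data is hypothetical, standing in for values that a real run would gather over SNMP before XSL stylesheets transform the XML into HTML or PDF documentation:

```python
# Write a discovered-device inventory to XML with the standard library.
import xml.etree.ElementTree as ET

devices = [
    {"name": "router1", "vendor": "Cisco", "interfaces": ["Fa0/0", "Fa0/1"]},
    {"name": "switch1", "vendor": "Cisco", "interfaces": ["Gi0/1"]},
]

root = ET.Element("network")
for dev in devices:
    node = ET.SubElement(root, "device", name=dev["name"], vendor=dev["vendor"])
    for iface in dev["interfaces"]:
        ET.SubElement(node, "interface").text = iface

# topology.xml can then be transformed by XSLT/XSL-FO into HTML or PDF.
ET.ElementTree(root).write("topology.xml", encoding="utf-8", xml_declaration=True)
```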
429. Analysis and evaluation of network intrusion detection methods to uncover data theft

Corsini, Julien January 2009
Nowadays, the majority of corporations mainly use signature-based intrusion detection. This trend is partly due to the fact that signature detection is a well-known technology, as opposed to anomaly detection, which is one of the hot topics in network security research. A second reason may be that anomaly detectors are known to generate many alerts, the majority of which are false alarms. Corporations need concrete comparisons between different tools in order to choose which is best suited to their needs. This thesis aims to compare an anomaly detector with a signature detector in order to establish which is better suited to detecting a data theft threat. The second aim of this thesis is to establish the influence of the training period length of an anomaly Intrusion Detection System (IDS) on its detection rate. This thesis presents the setup of a Network-based Intrusion Detection System (NIDS) evaluation testbed. It shows the setup of two IDSes: the signature detector Snort and the anomaly detector Statistical Packet Anomaly Detection Engine (SPADE). The evaluation testbed also includes the setup of a data theft scenario (reconnaissance, a brute-force attack on a server, and data theft). The results from the experiments carried out in this thesis proved inconclusive, mainly because the anomaly detector SPADE requires a configuration adapted to the network monitored. Despite the inconclusive experimental results, this thesis could act as documentation for setting up a NIDS evaluation testbed. It could also be considered documentation for the anomaly detector SPADE; this statement is made from the observation that there is no centralised documentation about SPADE, and not a single research paper documents the setup of an evaluation testbed.
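In the spirit of SPADE's probability-based scoring (the engine's exact internals and configuration are not reproduced here, so this is only a loose sketch), a packet can be scored by how rare its destination has been in the traffic observed so far:

```python
# Rarity-based anomaly score over (dst_ip, dst_port) pairs: common destinations
# score low, rare ones -- e.g. a probe seen during reconnaissance -- score high.
from collections import Counter
from math import log2

class AnomalyScorer:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, dst_ip, dst_port):
        self.counts[(dst_ip, dst_port)] += 1
        self.total += 1

    def score(self, dst_ip, dst_port):
        prob = self.counts[(dst_ip, dst_port)] / self.total if self.total else 0.0
        return -log2(prob) if prob else float("inf")

scorer = AnomalyScorer()
for _ in range(99):
    scorer.observe("10.0.0.5", 80)       # hypothetical routine web traffic
scorer.observe("10.0.0.5", 22)           # a single SSH connection
print(scorer.score("10.0.0.5", 80))      # ~0.01, common
print(scorer.score("10.0.0.5", 22))      # ~6.64, rare
```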
430. Evaluation of digital identity using Windows CardSpace

Fernandez Sepulveda, Antonio January 2008
The Internet was initially created for academic purposes and, owing to its success, has been extended to commercial environments such as e-commerce, banking, and email. As a result, Internet crime has also increased. This can take many forms, such as theft of personal data, impersonation of identity, and network intrusions. Authentication systems such as username and password are often insecure and difficult to handle when the user has access to a multitude of services, as they have to remember many different credentials. Other, more secure systems, such as security certificates and biometrics, can also be difficult for many users. This is further compounded by the fact that the user often does not have control over their personal information, as it is stored on external systems (such as on a service provider's site). The aim of this thesis is to present a review and a prototype of a Federated Identity Management system, which puts control of the user's identity information in the hands of the user. In this system the user has control over their identity information and can decide whether to provide specific information to external systems. The user can also manage their identity information easily with Information Cards. These Information Cards contain a number of claims that represent the user's personal information, and the user can use them for a number of different services. The Federated Identity Management system also introduces the concept of the Identity Provider, which handles the user's identity information and issues a token to the service provider; the Identity Provider also verifies that the user's credentials are valid. The prototype has been developed using a number of different technologies, such as the .NET Framework 3.0, CardSpace, C#, ASP.NET, and so on. In order to obtain a clear result from this model of authentication, a website prototype was created that provides user authentication by means of Information Cards, and another, for evaluation purposes, using a username and password. This evaluation includes a timing test (which checks the time taken by the authentication process), a functionality test, and both quantitative and qualitative evaluation. Thirteen different users took part, and the results obtained show that the use of Information Cards seems to improve the user experience in the authentication process and increase the security level compared with username and password authentication. This thesis concludes that the Federated Identity Management model provides a strong solution to the problem of user authentication, and could protect the privacy rights of the user and return control of the user's identity information to the user.
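A toy sketch of the claims-and-token idea (not the actual CardSpace token format, which is based on signed security tokens such as SAML assertions; names, claims and the shared secret below are hypothetical): the identity provider signs a set of claims that the relying website can then verify.

```python
# Claims signed by an identity provider and verified by a relying party.
import hmac, hashlib, json

IDP_SECRET = b"hypothetical-shared-secret"

def issue_token(claims):
    """Identity provider: sign the user's claims."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_token(token):
    """Relying party: check the signature over the presented claims."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = issue_token({"givenname": "Antonio", "emailaddress": "user@example.com"})
print(verify_token(token))  # True
```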
