1. An agent-based framework to support adaptive hypermedia
Bailey, Christopher Paul. January 2002.
The field of adaptive hypermedia is a little over a decade old. It has a rich history in a range of fields such as artificial intelligence, user modelling, intelligent tutoring systems and hypertext. Early adaptive hypermedia work concentrated on application-led research, developing a range of systems for specific purposes. In 1996, Peter Brusilovsky reviewed the state of the art and proposed a taxonomy of adaptive hypermedia techniques, thereby providing the means to categorise adaptive hypermedia systems. Since then, several practical frameworks for adaptive hypermedia applications have been produced, in addition to formal models for describing adaptive hypermedia applications. This thesis presents a new framework for adaptive hypermedia systems based on agent technology, a field of research largely ignored within the adaptive hypermedia community. Conceptually, this framework occupies a middle ground between the formal reference models for adaptive hypermedia and application-specific frameworks. The framework provides the means to implement formal models using a variety of architectural approaches. Three novel adaptive hypermedia applications have been developed around this agent-based framework. Each system employs a different architectural structure, models the user with a different set of profiling techniques, and provides a different set of adaptive features. The diversity of these three systems demonstrates the flexibility and functionality of the proposed agent-based framework.
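The abstract does not publish the framework's API, so the following is only a hypothetical sketch of the kind of agent decomposition it describes: one agent owns the user model, another applies a single adaptive technique (here, link annotation), and the two interact only through messages. All class and message names are invented for illustration.

```python
class Agent:
    """Base class: agents communicate only by exchanging messages (hypothetical API)."""
    def handle(self, message: dict) -> dict | None:
        raise NotImplementedError

class UserModelAgent(Agent):
    """Maintains a simple overlay user model: which pages the user has visited."""
    def __init__(self):
        self.visited: set[str] = set()

    def handle(self, message: dict) -> dict | None:
        if message["type"] == "page_visited":
            self.visited.add(message["page"])
        elif message["type"] == "query_visited":
            return {"visited": message["page"] in self.visited}
        return None

class LinkAnnotationAgent(Agent):
    """Applies one adaptive hypermedia technique: annotating links by familiarity."""
    def __init__(self, user_model: UserModelAgent):
        self.user_model = user_model

    def handle(self, message: dict) -> dict | None:
        if message["type"] == "annotate_link":
            reply = self.user_model.handle(
                {"type": "query_visited", "page": message["target"]})
            return {"annotation": "familiar" if reply["visited"] else "new"}
        return None

model = UserModelAgent()
model.handle({"type": "page_visited", "page": "intro.html"})
annotator = LinkAnnotationAgent(model)
print(annotator.handle({"type": "annotate_link", "target": "intro.html"}))
# -> {'annotation': 'familiar'}
```

Because the agents share no state beyond messages, either one could be swapped for a different profiling or adaptation strategy, which is the kind of architectural flexibility the abstract claims.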
2. Compression's effect on end-to-end latency in file upload systems that utilize encryption
Zaar, Kristoffer. January 2023.
Encryption is the process of obfuscating data to restrict access to it while allowing it to be returned to its original non-obfuscated form through decryption. It is increasingly used within web-based systems to secure data. When encryption is introduced in a system, the overall end-to-end latency of the system typically increases, and the increase depends on the size of the input given to the encryption algorithm. Arguably, the latency introduced by encryption can be reduced if data sizes are reduced before encryption. Lossless compression is the process of reducing the overall footprint of some data without losing information. Introducing such a step in a web-based system that uses encryption could reduce the system's overall end-to-end latency by cutting both encryption time and data transfer time. This thesis evaluates whether the introduction of compression can reduce end-to-end latency in a web-based file upload system that encrypts received files before storage. A series of experiments was performed on a purpose-built file upload system in which files are compressed before upload and encryption. The results show that compression can reduce end-to-end latency in web-based file upload systems that use encryption by approximately 39% for upload scenarios and approximately 49% for download scenarios when running in a system configuration with network latency.
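As a minimal sketch of the compress-before-encrypt pipeline the experiments evaluate (assuming zlib for lossless compression and the `cryptography` package's Fernet scheme; the abstract names neither library):

```python
import time
import zlib

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()
cipher = Fernet(key)

def upload(data: bytes, compress: bool) -> tuple[bytes, float]:
    """Optionally compress, then encrypt; return ciphertext and elapsed seconds."""
    start = time.perf_counter()
    payload = zlib.compress(data) if compress else data
    ciphertext = cipher.encrypt(payload)
    return ciphertext, time.perf_counter() - start

# A highly compressible payload; real upload files will compress less predictably.
data = b"highly compressible payload " * 100_000
for compress in (False, True):
    ciphertext, elapsed = upload(data, compress)
    print(f"compress={compress}: {len(ciphertext):>9} ciphertext bytes in {elapsed:.4f}s")
```

The sketch times only compression plus encryption; in the thesis experiments the smaller ciphertext also shortens network transfer, which is the other contributor to the end-to-end saving in the configuration with network latency.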
3. Hazus-MH flood loss estimation on a web-based system
Yildirim, Enes. 01 August 2017.
In recent decades, the importance of flood damage and loss estimation systems has increased significantly because of the social and economic consequences of flooding. Such systems help emergency decision makers understand the possible impacts of flooding and prepare better resilience plans for managing and allocating resources. Recent web-based technologies can be used to build a system that analyzes flood impact in both urban and rural areas. By taking advantage of web-based systems, decision makers can examine the effects of flooding under many different scenarios with less effort. Most emergency management plans have been created using paper-based maps or GIS (Geographic Information System) software. Paper-based materials generally illustrate floodplain maps, give basic instructions about what to do during a flooding event, and show the main roads for evacuating people from their neighborhoods. Since the development of GIS software, these plans have been prepared with more detailed information about demographics, buildings, critical infrastructure, and so on.
Taking advantage of GIS, several software packages have been developed for understanding disaster impacts on a community. One widely used GIS-based package, Hazus-MH (Multi-Hazard), created by FEMA (the Federal Emergency Management Agency), can analyze disaster effects in both urban and rural areas. It lets users run a disaster simulation (earthquake, hurricane, or flood) and observe the effects. However, its capabilities are not as broad as those of web-based technologies: Hazus-MH has specific software requirements, can show only a limited number of flood scenarios, and cannot represent the real-time situation. For instance, the software is compatible only with Windows computers and with a specific version of ArcMap rather than other GIS software, and users must have GIS expertise to operate it. A web-based system removes these limitations: users operate the system in an internet browser and need no GIS knowledge. Hundreds of people can thus connect to the system, observe flood impact in real time, and explore their neighborhoods to prepare for flooding.
In this study, the Iowa Flood Damage Estimation Platform (IFDEP) is introduced. The platform draws on various data sources: floodplain maps and rasters created by the IFC (Iowa Flood Center), default Hazus-MH data, census data, the National Structure Inventory, real-time USGS (United States Geological Survey) stream gauge data, real-time IFC bridge sensor data, and the flood forecast model created by the IFC. To estimate damage and loss, depth-damage curves created by the U.S. Army Corps of Engineers are applied. All of these data are stored in PostgreSQL, so hundreds of different flood analyses can be run as queries that cross-reference floodplain data with census data. FEMA defines three levels of Hazus analysis; with web-based technology, Level 3 analysis can be done on the fly, and better, more accurate results are presented to users. Using real-time stream gauge data and flood forecast data makes it possible to show current and upcoming flood damage and loss, which current GIS-based desktop software cannot provide. Furthermore, analyses are visualized with JavaScript and HTML5, which allow better illustration and communication than the limited visualization options of GIS software.
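A minimal sketch of how a depth-damage curve turns a flood depth and a building value into a loss estimate; the curve values below are invented for illustration, whereas the platform applies the actual Army Corps of Engineers curves, which vary by occupancy type:

```python
import bisect

# Hypothetical residential depth-damage curve: flood depth (ft) -> damage (% of value).
DEPTHS = [0.0, 1.0, 2.0, 4.0, 6.0, 8.0]
DAMAGE_PCT = [0.0, 12.0, 23.0, 37.0, 48.0, 56.0]

def damage_fraction(depth_ft: float) -> float:
    """Linearly interpolate the damage fraction for a given flood depth."""
    if depth_ft <= DEPTHS[0]:
        return DAMAGE_PCT[0] / 100
    if depth_ft >= DEPTHS[-1]:
        return DAMAGE_PCT[-1] / 100
    i = bisect.bisect_right(DEPTHS, depth_ft)
    x0, x1 = DEPTHS[i - 1], DEPTHS[i]
    y0, y1 = DAMAGE_PCT[i - 1], DAMAGE_PCT[i]
    return (y0 + (y1 - y0) * (depth_ft - x0) / (x1 - x0)) / 100

def building_loss(depth_ft: float, building_value: float) -> float:
    """Estimated structural loss for one building."""
    return building_value * damage_fraction(depth_ft)

# 3 ft of water in a $250,000 building -> interpolated 30% damage -> $75,000.
print(building_loss(3.0, 250_000))
```

In the platform this calculation would run per structure, with depths read from the floodplain rasters and building values from the structure inventory, which is what makes the database-side cross-referencing described above useful.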
Looking ahead, IFDEP can be extended with other data sources such as the National Resources Inventory, the National Agricultural Statistics Service, U.S. census data, tax assessor building data, land use data and more; this can be done easily on the database side. It should also be noted that augmented reality (AR) and virtual reality (VR) technologies can broaden the capabilities of the platform: a Microsoft HoloLens, for example, could connect to IFDEP and visualize real-time information through the device. IFDEP could then serve both emergency managers at headquarters and emergency management crews in the field.
4. The use of systems development methodologies in web-based application development in South Africa / Martin Allen Taylor
Taylor, Martin Allen. January 2006.
This study investigated the use of systems development methodologies in Web-based application development in South Africa. Web-based systems differ from traditional information systems by integrating different media for knowledge representation and utilizing hypertext functionality. In doing so, Web-based systems support not only the creation, integration, analysis, and distribution of knowledge about business transactions, but also its storage and transfer within a structured information system.
There are numerous methodologies available for developing Web-based systems, five of which were discussed in this study: the Web IS Development Methodology (WISDM), the Internet Commerce Development Methodology (ICDM), Web Engineering, Extreme Programming, and the Relationship Management Methodology (RMM).
In this study a qualitative research approach was followed. Case studies were done on three different organizations in the South African marketplace, with semi-structured interviews used for data collection at each organization. The interviews were transcribed, and the data were analysed using content analysis and cross-case analysis. One of the main goals of this research was to determine "how" systems development methodologies are used in practice to develop Web-based systems, and to what extent they are used.
The research found that the organizations that participated in this study mainly use in-house developed methodologies to develop Web-based systems, and that they adhere strictly to their methodology. The main reasons organizations choose to use methodologies are that methodologies aid in the delivery of a better-quality Web-based system and act as a good project management mechanism within the organization.
Thesis (M.Com. (Computer Science))--North-West University, Potchefstroom Campus, 2007.
5. On learning and visualizing lexicographic preference trees
Moussa, Ahmed S. 01 January 2019.
Preferences are very important in research fields such as decision making, recommender systems and marketing. The focus of this thesis is on preferences over combinatorial domains, which are domains of objects configured with categorical attributes. For example, the domain of cars includes car objects that are constructed with values for attributes such as 'make', 'year', 'model', 'color', 'body type' and 'transmission'. Different values can instantiate an attribute. For instance, values for attribute 'make' can be Honda, Toyota, Tesla or BMW, and attribute 'transmission' can have automatic or manual. To this end, this thesis studies problems on preference visualization and learning for lexicographic preference trees, graphical preference models that are often compact over complex domains of objects built of categorical attributes. Visualizing preferences is essential to provide users with insights into the process of decision making, while learning preferences from data is practically important, as it is ineffective to elicit preference models directly from users.
The results of this thesis are in two parts: 1) for preference visualization, a web-based system is created that visualizes various types of lexicographic preference tree models learned by a greedy learning algorithm; 2) for preference learning, a genetic algorithm, called GA, is designed and implemented that learns a restricted type of lexicographic preference tree, called the unconditional importance and unconditional preference tree, or UIUP tree for short. Experiments show that GA achieves higher accuracy than the greedy algorithm at the cost of more computational time. Moreover, a Dynamic Programming Algorithm (DPA) was devised and implemented that computes an optimal UIUP tree model, in the sense that the model satisfies as many examples as possible in the dataset. This novel exact algorithm was used to evaluate the quality of the models computed by GA, and it reduces the factorial time complexity of the brute-force algorithm to exponential. The major contribution of this thesis to the fields of machine learning and data mining is the exact learning algorithm DPA, which searches the huge space of UIUP tree models for the one that correctly classifies the most examples in the training dataset; such a model is referred to as the optimal model in this thesis. Finally, using datasets produced from randomly generated UIUP trees, this thesis presents experimental results on the performance (e.g., accuracy and computational time) of GA compared to the existing greedy algorithm and DPA.
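A minimal sketch of how a UIUP model orders objects (the attribute importance order and value preference orders below are invented; none of the thesis's learning algorithms are reproduced here): attributes are ranked by unconditional importance, each with an unconditional preference order over its values, and two objects are compared on the most important attribute where they differ.

```python
# Hypothetical UIUP model over a toy car domain.
IMPORTANCE = ["make", "transmission", "color"]          # most important first
PREFERENCE = {
    "make": ["Tesla", "Honda", "Toyota", "BMW"],        # preferred value first
    "transmission": ["automatic", "manual"],
    "color": ["black", "white", "red"],
}

def prefers(a: dict, b: dict) -> bool:
    """True if object `a` is strictly preferred to `b` under the UIUP model."""
    for attr in IMPORTANCE:
        if a[attr] != b[attr]:
            order = PREFERENCE[attr]
            return order.index(a[attr]) < order.index(b[attr])
    return False  # identical on all attributes: not strictly preferred

car1 = {"make": "Honda", "transmission": "manual", "color": "black"}
car2 = {"make": "Honda", "transmission": "automatic", "color": "red"}
print(prefers(car2, car1))  # True: same make, so car2 wins on transmission
```

Learning such a model from ranked example pairs means choosing the attribute order and value orders that make `prefers` agree with as many examples as possible, which is the optimization problem GA approximates and DPA solves exactly.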