991 |
Mitteilungen des URZ 1/2009. Riedel, Wolfgang 25 March 2009 (has links)
Information from the University Computing Centre (URZ), with the 2008 annual review of the URZ's current projects and services: 2008 annual review
Software equipment of the training pools
News in brief
Software news
|
992 |
Mitteilungen des URZ 1/2010. Riedel, Wolfgang, Schier, Thomas 01 March 2010 (has links)
Information from the University Computing Centre (URZ), with the 2009 annual review of the URZ's current projects and services: 2009 annual review
New technology in the campus backbone - Virtual Switching System (VSS)
Software equipment of the training pools
News in brief
Software news
|
993 |
Prostředí pro tvorbu interaktivních webových stránek / Interactive Web Page Design Environment. Moravec, Jaroslav January 2008 (has links)
This master's thesis describes an environment for the creation and management of interactive web pages, covering both structural design and the visual side. The basic idea is that a page consists of individual elements that can be arbitrarily composed together. Several kinds of such elements exist: interactive, content, database, and informative elements. Furthermore, the environment includes tools for account management, access control, database administration, auditing, multilanguage support, and more.
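The compose-anything idea described above can be sketched as a tree of typed elements; the class, method, and element-kind names below are invented for illustration and are not taken from the thesis:

```python
# Minimal sketch of a page as a tree of typed, freely composable elements.
class Element:
    def __init__(self, kind, content="", children=None):
        self.kind = kind            # e.g. "content", "interactive", "database"
        self.content = content
        self.children = children or []

    def render(self):
        # Render own content, then recursively render all child elements.
        inner = self.content + "".join(c.render() for c in self.children)
        return f"<div class='{self.kind}'>{inner}</div>"

# A page composed of a content element and an interactive element.
page = Element("page", children=[
    Element("content", "Welcome"),
    Element("interactive", "[login form]"),
])
```

Because every element exposes the same `render` interface, any element kind can be nested inside any other, which is the composability the abstract describes.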
|
994 |
Aplikace pro prezentaci a modifikaci dat v přenosných zařízeních / Application for Data Presentation and Modification in Mobile Devices. Kučera, Pavel January 2009 (has links)
This thesis studies the possible uses of mobile devices in the field of automation. An automation company needed a PDA-based tool for remote control of a technological process. The ability of contemporary portable devices to run application software is analysed. Based on the specified requirements, a client-server system built on VNC technology is designed and implemented. The server is a Linux-based PC implemented using the xinetd daemon and Xvnc. Two standard VNC clients were created by modifying free software available under the GPL license: one runs on the Windows Mobile operating system and the other is a Java MIDlet.
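For context, a server of the shape described above is typically wired up by letting xinetd spawn Xvnc once per incoming connection. The entry below is an illustrative sketch only; the service name, port, and display settings are assumptions, not the thesis's actual configuration:

```
# Illustrative xinetd service entry: xinetd listens on a TCP port and
# launches Xvnc for each connecting VNC client.
service vnc
{
    type        = UNLISTED
    port        = 5950
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    server      = /usr/bin/Xvnc
    server_args = -inetd -query localhost -once -geometry 800x600 -depth 16
    disable     = no
}
```

The `-inetd` flag makes Xvnc talk VNC over stdin/stdout as xinetd expects, and `-once` ends the session when the client disconnects.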
|
995 |
Client-Server Communications Efficiency in GIS/NIS Applications : An evaluation of communications protocols and serialization formats / Kommunikationseffektivitet mellan klient och server i GIS/NIS-applikationer : En utvärdering av kommunikationsprotokoll och serialiseringsformat. Klingestedt, Kashmir January 2018 (has links)
Geographic Information Systems and Network Information Systems are important tools for our society, used for handling geographic spatial data and large information networks. It is therefore important to make sure such tools are of high quality. GIS/NIS applications typically deal with a lot of data, possibly resulting in heavy loads of network traffic. This work evaluates two communications protocols and two serialization formats for client-server communications efficiency in GIS/NIS applications: HTTP/1.1, HTTP/2, Java Object Serialization, and Google's Protocol Buffers. Each was implemented directly into a commercial GIS/NIS environment and evaluated by measuring two signature server calls in the system. The metrics examined are call duration, HTTP overhead size, and HTTP payload size. The results suggest that HTTP/2 and Google's Protocol Buffers outperform HTTP/1.1 and Java Object Serialization respectively. An 87% decrease in HTTP overhead size was achieved when switching from HTTP/1.1 to HTTP/2. The HTTP payload size is also shown to decrease with the use of Protocol Buffers rather than Java Object Serialization, especially for communications where data consist of many different object types. Concerning call duration, the results suggest that the choice of communications protocol matters more than the choice of serialization format for communications containing little data, while the opposite holds for communications containing much data.
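The payload-size comparison the abstract describes can be mimicked with standard-library serializers; the sketch below uses Python's pickle (as a stand-in for native object serialization) and JSON (as a stand-in for a text wire format) on invented records, not the thesis's actual GIS/NIS data or formats:

```python
import json
import pickle

# Invented "feature" records standing in for the data a GIS/NIS server
# call might return.
records = [{"id": i, "x": i * 0.5, "y": -i * 0.25, "kind": "node"}
           for i in range(1000)]

# Native object serialization versus a text-based wire format.
pickled = pickle.dumps(records)
as_json = json.dumps(records).encode("utf-8")

print(f"pickle: {len(pickled)} bytes, json: {len(as_json)} bytes")
```

Measuring serialized sizes of the same records this way is the essence of the HTTP-payload metric; in the thesis the same comparison is made between Java Object Serialization and Protocol Buffers inside a commercial system.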
|
996 |
An approach to automate the adaptor software generation for tool integration in Application/Product Lifecycle Management tool chains. Singh, Shikhar January 2016 (has links)
An emerging problem in organisations is that a large number of tools store data and must communicate with each other frequently throughout the development of an application or product. However, no means of communication exists that avoids both the intervention of a central entity (usually a server) and the storage of schemas in a central repository. Accessing data across tools and linking it is difficult and resource intensive. As part of this thesis, we develop a piece of software (also referred to as an 'adaptor'), which, when implemented in lifecycle management systems, integrates data seamlessly. This eliminates the need to store database schemas in a central repository and makes accessing data across tools less resource intensive. The adaptor acts as a wrapper around the tools and allows them to communicate with each other directly and exchange data. When the adaptor is used to communicate data between tools, the data in relational databases is first converted into RDF format and then sent or received; RDF is therefore the crucial underlying concept on which the software is based. The Resource Description Framework (RDF) provides data integration irrespective of underlying schemas by treating data as resources and representing them as URIs. RDF is a data model used for the exchange and communication of data on the Internet, and it can be applied to other real-world problems such as tool integration and the automation of communication between relational databases. However, developing this adaptor for every tool requires understanding the individual schema and structure of each tool's database, which again demands a lot of effort from the adaptor's developer. The main aim of the thesis is therefore to automate the development of such adaptors.
With this automation, the need for anyone to manually assess the database and then develop an adaptor specific to it is eliminated. Such adaptors and concepts can be used to implement similar solutions in other organisations facing similar problems. In the end, the output of the thesis is an approach which automates the process of generating these adaptors.
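The row-to-RDF conversion an adaptor of this kind performs can be sketched as follows; the base URI, table, and column names are invented for illustration and are not the thesis's actual mapping:

```python
# Sketch: map one relational row to RDF triples by minting URIs from the
# table name and primary key, so data becomes schema-independent resources.
BASE = "http://example.org/data/"  # hypothetical base URI

def row_to_triples(table, pk_column, row):
    """Turn one relational row (a dict) into (subject, predicate, object) triples."""
    subject = f"{BASE}{table}/{row[pk_column]}"   # the row becomes a resource
    triples = []
    for column, value in row.items():
        if column == pk_column:
            continue                               # the key is in the subject URI
        predicate = f"{BASE}{table}#{column}"      # each column becomes a property
        triples.append((subject, predicate, value))
    return triples

triples = row_to_triples("employee", "id", {"id": 7, "name": "Ada", "dept": "R&D"})
```

Because every tool's rows end up as plain (subject, predicate, object) triples with URI-named properties, two tools can exchange data without sharing each other's database schemas, which is the integration property the abstract claims for RDF.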
|
997 |
Towards a model for teaching distributed computing in a distance-based educational environment. Le Roux, Petra 02 1900
Several technologies and languages exist for the development and implementation of distributed systems. Furthermore, several models for teaching computer programming and teaching programming in a distance-based educational environment exist. Limited literature, however, is available on models for teaching distributed computing in a distance-based educational environment. The focus of this study is to examine how distributed computing should be taught in a distance-based educational environment so as to ensure effective and quality learning for students. The required effectiveness and quality should be comparable to those for students exposed to laboratories, as commonly found in residential universities. This leads to an investigation of the factors that contribute to the success of teaching distributed computing and how these factors can be integrated into a distance-based teaching model. The study consisted of a literature study, followed by a comparative study of available tools to aid in the learning and teaching of distributed computing in a distance-based educational environment. A model to accomplish this teaching and learning is then proposed and implemented. The findings of the study highlight the requirements and challenges that a student of distributed computing in a distance-based educational environment faces and emphasises how the proposed model can address these challenges. This study employed qualitative research, as opposed to quantitative research, as qualitative research methods are designed to help researchers to understand people and the social and cultural contexts within which they live. The research methods employed are design research, since an artefact is created, and a case study, since “how” and “why” questions need to be answered. Data collection was done through a survey. Each method was evaluated via its own well-established evaluation methods, since evaluation is a crucial component of the research process. / Computing / M. Sc. 
(Computer Science)
|
998 |
The taxation of electronic commerce and the implications for current taxation practices in South Africa. Doussy, Elizabeth 01 January 2002 (has links)
This study analyses the nature and implementation of electronic commerce in order to
identify possible problems for taxation and pinpoint those problems which may be relevant
to South Africa. Solutions suggested by certain countries and institutions are evaluated for
possible implementation in South Africa.
The study suggests that although current taxation legislation in South Africa is applicable
to electronic commerce transactions, it is not sufficient to cater effectively for this type of
business. The conclusion reached is that international co-operation is essential in finding
solutions. A number of recommendations are made regarding aspects of South African
taxation legislation which need to be clarified through policy decisions.
/ Taxation / M.Comm.
|
999 |
工業標準伺服器事業在台灣惠普及康柏公司合併前後公司組織及營運策略的研究. 鍾易良 Unknown Date (has links)
/ The Graduate Institute of Business Administration
National ChengChi University
The Study of organization and business strategies of Industry Standard Server in Hewlett Packard Taiwan before and after the merger of Hewlett Packard and Compaq Computer
On September 4th, 2001, Carly Fiorina, CEO of Hewlett Packard, announced the merger of Hewlett Packard and Compaq Computer at USD 25 billion. This amount was twice as much as the biggest merger that had ever happened in the IT industry, and it was the most important decision made in HP's 63-year history.
After the merger, Hewlett Packard became the worldwide number two computer company, an empire where the sun never sets. The annual revenue of the new Hewlett Packard totals 87.4 billion dollars, second only to IBM.
This research takes as its case study the product "Industry Standard Server", a business belonging to the Enterprise Solution Group in Hewlett Packard Taiwan. The theories I will be using are:
1) the three dimensions of business strategy:
a) business scope;
b) core resources; and
c) relationship network and
2) the four competitive strategies:
a) value and effectiveness;
b) position and strength;
c) group and core competence;
d) industry and differentiation
Through the changes in the organization and its business strategy before and after the merger, I will show how the three companies – Hewlett Packard, Compaq, and the new Hewlett Packard – each pursued their own business strategies, and I will also provide suggestions on related questions.
By the end of this research, I hope some useful points can be offered to companies that are planning horizontal integration. I will do so by observing and analyzing the following three aspects:
1) To analyze the three companies by the three dimensions of business strategy
2) To analyze the competitive advantages of the three companies, before and after the merger.
3) To analyze how the Industry Standard Server business unit in Hewlett Packard Taiwan achieved synergy through the merger of the two companies.
Finally, the intention of this research is to provide advice to companies that are planning horizontal integration; hopefully, through this research, a shortened learning curve will benefit them, and through such integration, synergy will be realized.
|
1000 |
Session hijacking attacks in wireless local area networks. Onder, Hulusi 03 1900 (has links)
Approved for public release, distribution is unlimited / Wireless Local Area Network (WLAN) technologies are becoming widely used since they provide more flexibility and availability. Unfortunately, it is possible for WLANs to be implemented with security flaws which are not addressed in the original 802.11 specification. IEEE formed a working group (TGi) to provide a complete solution (code named 802.11i standard) to all the security problems of the WLANs. The group proposed using 802.1X as an interim solution to the deficiencies in WLAN authentication and key management. The full 802.11i standard is expected to be finalized by the end of 2004. Although 802.1X provides a better authentication scheme than the original 802.11 security solution, it is still vulnerable to denial-of-service, session hijacking, and man-in-the-middle attacks. Using an open-source 802.1X test-bed, this thesis evaluates various session hijacking mechanisms through experimentation. The main conclusion is that the risk of session hijacking attack is significantly reduced with the new security standard (802.11i); however, the new standard will not resolve all of the problems. An attempt to launch a session hijacking attack against the new security standard will not succeed, although it will result in a denial-of-service attack against the user. / Lieutenant Junior Grade, Turkish Navy
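Why such hijacking is possible can be seen in a toy model (not the thesis's 802.1X test-bed): when a session is bound only to the client's MAC address, any station presenting the same MAC after the real client is forced off inherits the authenticated session. All names here are invented for illustration:

```python
# Toy model of a session table keyed only by client MAC address, as in
# pre-802.11i WLAN setups; it shows the weakness, not a real attack tool.
sessions = {}

def authenticate(mac, credentials_ok):
    """Admit a station after a credential check; the session is bound to its MAC."""
    if credentials_ok:
        sessions[mac] = "authenticated"

def access(mac):
    """Later traffic is trusted purely on the basis of the sender's MAC."""
    return sessions.get(mac, "denied")

authenticate("aa:bb:cc:dd:ee:ff", credentials_ok=True)  # legitimate client
# After knocking the real client offline, an attacker spoofing the same MAC
# inherits the session without ever presenting credentials.
hijacked = access("aa:bb:cc:dd:ee:ff")
```

Per-packet cryptographic binding of traffic to the authenticated key, as introduced with 802.11i, removes exactly this gap: a spoofed MAC alone no longer suffices to continue the session.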
|