  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Evaluierung von AJAX-basierten frameworks für das Web 2.0 / Evaluation of AJAX-based frameworks for Web 2.0

Langer, André 20 April 2007 (has links) (PDF)
For several years, "remote scripting" applications have been experiencing a veritable boom in demand. While usability considerations long dictated a strict distinction between desktop applications and web applications, offerings that blur this strict separation have increasingly been appearing on the World Wide Web. Interactive user dialogues, concurrent processing and visual aids such as drag-and-drop effects, previously known to users only from stand-alone software products in a specific operating-system environment, are finding their way onto web pages. Many of these new application and interaction possibilities on the worldwide data network are now grouped under the umbrella term Web 2.0. For the user, this trend brings many advantages: appealing, intuitive user guidance without the need to reload an entire page at every interaction step and without noticeable time overhead. What is meant to make things easier for the user, however, often means extra work for the programmer at first. One technique for realising such so-called Rich Internet Applications, which has pushed its way to the fore over the last two years, is grouped under the name AJAX. There is no uniform standard, so new AJAX-based frameworks are released almost daily, intended to relieve the programmer of (at least part of) the complexity of controlling program flow. The task of this student research project is therefore to systematise the by now unmanageable range of AJAX frameworks and to give an overview of the advantages and disadvantages of selected libraries. To this end, a catalogue of criteria is to be developed that allows the various frameworks to be assessed from different points of view. Particular emphasis is to be placed on criteria from the programmer's perspective (language independence, overhead, implementation options, ...) and from the user's perspective (platform requirements, learning effort, quality of results, ...). The catalogue of criteria is then to be applied to a selection of existing, freely available AJAX frameworks that are judged likely to remain relevant. Finally, the results are to be presented in an overall overview intended as an objective recommendation for users who face the choice of which AJAX library to adopt.
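A minimal sketch of the asynchronous request pattern the abstract refers to (updating part of a page without a full reload). It uses the browser's standard XMLHttpRequest API rather than any particular framework from the survey; the endpoint URL and element id are invented for illustration.

```typescript
// Minimal AJAX sketch: fetch data asynchronously and update one page element
// without reloading the whole page. The endpoint "/api/items" and the element
// id "result" are placeholders, not part of any framework evaluated in the thesis.
function loadItems(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/items", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
      const target = document.getElementById("result");
      if (target) {
        target.textContent = xhr.responseText; // only this element changes
      }
    }
  };
  xhr.send();
}

loadItems();
```

Frameworks in the survey wrap this raw pattern behind higher-level APIs; the sketch only shows the underlying mechanism they all rely on.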
112

An investigation into the feasibility of monitoring a call centre using an emotion recognition system

Stoop, Werner 04 June 2010 (has links)
In this dissertation a method for the classification of emotion in speech recordings made in the customer service call centre of a large business is presented. The problem addressed here is that customer service analysts at large businesses have to listen to large numbers of call centre recordings in order to discover customer service-related issues. Since recordings where the customer exhibits emotion are more likely to contain useful information for service improvement than “neutral” ones, being able to identify those recordings should save a lot of time for the customer service analyst. MTN South Africa agreed to provide assistance for this project. The system that has been developed for this project can interface with MTN’s call centre database, download recordings, classify them according to their emotional content, and provide feedback to the user. The system faces the additional challenge that it is required to classify emotion notwithstanding the fact that the caller may have one of several South African accents. It should also be able to function with recordings made at telephone-quality sample rates. The project identifies several speech features that can be used to classify a speech recording according to its emotional content. The project uses these features to research the general methods by which the problem of emotion classification in speech can be approached. The project examines both a K-Nearest Neighbours approach and an artificial neural network-based approach to classify the emotion of the speaker. Research is also done with regard to classifying a recording according to the gender of the speaker using a neural network approach. The reason for this classification is that the gender of a speaker may be useful input into an emotional classifier. The project furthermore examines the problem of identifying smaller segments of speech in a recording. In the typical call centre conversation, a recording may start with the agent greeting the customer, the customer stating his or her problem, the agent performing an action, during which time no speech occurs, the agent reporting back to the user and the call being terminated. The approach taken by this project allows the program to isolate these different segments of speech in a recording and discard segments of the recording where no speech occurs. This project suggests and implements a practical approach to the creation of a classifier in a commercial environment through its use of a scripting language interpreter that can train a classifier in one script and use the trained classifier in another script to classify unknown recordings. The project also examines the practical issues involved in implementing an emotional classifier. It addresses the downloading of recordings from the call centre, classifying the recording and presenting the results to the customer service analyst.
AFRIKAANS: A method for the classification of emotion in speech recordings made in the call centre of a large business is presented in this dissertation. The problem addressed is that customer service analysts at large businesses have to listen to large numbers of call centre recordings in order to identify customer service issues. Since recordings in which the customer shows emotion are likely to contain useful information about service improvement, the ability to identify those recordings should save the analyst a great deal of time. MTN South Africa agreed to provide assistance for the project. The system that was developed can retrieve recordings from MTN’s call centre database, classify them according to emotional content and provide feedback to the user. The system must overcome the further challenge of classifying emotion notwithstanding the fact that the speaker may have one of several South African accents, and it must be able to analyse recordings made at telephone-quality sample rates. The project identifies several speech features that can be used to classify a recording according to its emotional content and uses these features to investigate the general methods by which the problem of emotion classification in speech can be approached. A K-Nearest Neighbours and a neural network approach are used to classify the emotion of the speaker. Research was also done on classifying the gender of the speaker with a neural network, since the speaker’s gender may be a useful input to an emotion classifier. The project further investigates the problem of identifying speech segments in a recording. In a typical call centre conversation the recording may begin with the agent greeting the customer, the customer stating his or her problem, the agent performing an action without speech, the agent reporting back to the customer and the call being terminated. The approach taken here allows the program to isolate these different segments and to discard portions of the recording in which no speech occurs. The project proposes and implements a practical approach to developing a classifier in a commercial environment through the use of a scripting-language interpreter that can train a classifier in one script and use the trained classifier in another script to classify unknown recordings. Finally, the practical aspects of implementing an emotion classifier are examined: downloading recordings from the call centre, classifying them and presenting the results to the customer service analyst. Copyright / Dissertation (MEng)--University of Pretoria, 2010. / Electrical, Electronic and Computer Engineering / unrestricted
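A minimal k-nearest-neighbours sketch of the kind of classification step the abstract describes. The feature layout (e.g. mean pitch, pitch variability, energy), the labels and the toy training data are illustrative assumptions, not the thesis's actual feature set or corpus.

```typescript
// Hypothetical feature vector per recording segment, e.g. [meanPitch, pitchStdDev, energy].
type Sample = { features: number[]; label: "angry" | "neutral" };

function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Classify an unknown segment by majority vote among its k nearest training samples.
function knnClassify(train: Sample[], query: number[], k = 3): string {
  const votes = train
    .map(s => ({ label: s.label, d: euclidean(s.features, query) }))
    .sort((x, y) => x.d - y.d)
    .slice(0, k)
    .reduce((acc, s) => acc.set(s.label, (acc.get(s.label) ?? 0) + 1), new Map<string, number>());
  return [...votes.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// Toy usage with made-up, already-normalised feature values.
const training: Sample[] = [
  { features: [0.9, 0.8, 0.7], label: "angry" },
  { features: [0.2, 0.1, 0.3], label: "neutral" },
  { features: [0.8, 0.7, 0.6], label: "angry" },
];
console.log(knnClassify(training, [0.85, 0.75, 0.65])); // "angry"
```

The neural-network approach examined in the dissertation would replace the distance-and-vote step with a trained model, but consumes the same kind of per-segment feature vectors.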
114

Jaroslavice – sídlo v krajině / Jaroslavice – place in the landscape

Šmejkal, Jiří January 2018 (has links)
The theme of this diploma thesis is an architectural study of a 3D-printer farm complex in Jaroslavice. The development of 3D-printing technology has far outpaced the response to its spatial needs: there is no systematically planned building typology corresponding to the requirements of such farms, so the farms adapt to whatever spaces are available. The main aim of the work is to reverse this situation and adapt the premises to the farms. The thesis follows the urban design for the restructuring of the Jaroslavice landscape elaborated in the previous semester and respects the principles established at the micro-region level: emphasis on self-sufficiency, the integrity of the population, and the use of current technologies. The site lies on the southern edge of Jaroslavice. Three agricultural buildings stand on the property; until 2010, when a photovoltaic power plant was built, they operated in conjunction with a neighbouring agricultural estate. After the power plant was built, those ties were irreversibly broken; the buildings are in very poor condition and the former cooperation no longer works. The proposed design replaces the existing buildings and shows a different way of using solar energy. It works with a hybrid typology in which the 3D-production area is combined with maximising the solar gains of the photovoltaic panels. Generative methods were used in the design, primarily for finding a form that achieves maximum solar gains and for verifying the efficiency of the structure. The proposal has several scenarios of possible development, with four different stages of growth and of linking the production areas. Because the printers are able to replicate themselves, rapid growth can be expected; the proposal therefore uses controlled-growth methods to simulate complex development under the condition of maximum solar exposure. Visual distraction and overheating are addressed by atypical sun-shading elements on the exterior façade; thin-film photovoltaic panels are mounted on these shading elements, so they produce electricity in addition to providing shade. Alongside the energy-efficient shape, a major advantage of the layout is that, instead of a corridor disposition, the basic unit is a cell on a central plan, which makes it possible to control and operate more machines more efficiently. The production site counts on full robotic automation in the future.
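A toy sketch of the controlled-growth idea described above: production cells are added one at a time at the free neighbouring position with the highest estimated solar gain. The grid, the gain heuristic and the growth limit are invented for illustration and stand in for the generative tools actually used in the design.

```typescript
// Toy controlled-growth simulation: grow a cluster of cells, always picking the
// free neighbouring position with the highest (invented) solar-gain score.
type Cell = { x: number; y: number };

// Placeholder scoring: favour south-facing (larger y) and less-shaded positions.
function solarGain(pos: Cell, occupied: Cell[]): number {
  const shading = occupied.filter(c => Math.abs(c.x - pos.x) <= 1 && Math.abs(c.y - pos.y) <= 1).length;
  return pos.y - 0.5 * shading;
}

function grow(start: Cell, steps: number): Cell[] {
  const cluster: Cell[] = [start];
  for (let i = 0; i < steps; i++) {
    // candidate positions: free orthogonal neighbours of the current cluster
    const candidates: Cell[] = cluster
      .flatMap(c => [
        { x: c.x + 1, y: c.y }, { x: c.x - 1, y: c.y },
        { x: c.x, y: c.y + 1 }, { x: c.x, y: c.y - 1 },
      ])
      .filter(p => !cluster.some(c => c.x === p.x && c.y === p.y));
    candidates.sort((a, b) => solarGain(b, cluster) - solarGain(a, cluster));
    cluster.push(candidates[0]); // add the best-scoring free neighbour
  }
  return cluster;
}

console.log(grow({ x: 0, y: 0 }, 4));
```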
115

The Art of Perl: How a Scripting Language (inter)Activated the World Wide Web

Gomez, Norberto, Jr. 17 April 2013 (has links)
In 1987, computer programmer and linguist Larry Wall authored the general-purpose, high-level, interpreted, dynamic Unix scripting language Perl. Borrowing features from C and awk, Perl was originally intended as a scripting language for text processing. However, with the rising popularity of the Internet and the advent of Tim Berners-Lee’s World Wide Web (Web) in the 1990s, Perl soon became the glue language of the Internet, due in large part to its relationship to the Hypertext Transfer Protocol (HTTP) and the Common Gateway Interface (CGI). Perl was the go-to language for on-the-fly program writing and coding, earning accolades from publisher Tim O’Reilly and hackers alike. Perl became a favorite language of amateur Web users, whom net artist Olia Lialina calls barbarians, or the indigenous. These users authored everything from database scripts to social spaces like chatrooms and bulletin boards. Perl, while largely ignored today, played a fundamental role in facilitating the social spaces and interactions of Web 1.0, or what I refer to as a Perl-net. Thus, Perl informed today’s more ubiquitous digital culture, referred to as Web 2.0, and the social web. This project examines Perl’s origin, which is predicated on postmodern theories such as deconstructionism and multiculturalism. Perl’s formal features are differentiated from those of other languages, like Java. In order to defend Perl’s status as an inherently cultural online tool, this project also analyzes many instances of cultural artifacts: script programs, chatrooms, code poetry, webpages, and net art. This cultural analysis is guided by the work of contemporary media archaeologists: Lialina and Dragan Espenschied, Erkki Huhtamo and Jussi Parikka. Lastly, the present state of digital culture is analyzed in an effort to re-consider the Perl scripting language as a relevant, critical computer language, capable of aiding in deprogramming the contemporary user.
116

Video game 'Underland', and, thesis 'Playable stories : writing and design methods for negotiating narrative and player agency'

Wood, Hannah January 2016 (has links)
Creative Project Abstract: The creative project of this thesis is a script prototype for Underland, a crime drama video game and digital playable story that demonstrates writing and design methods for negotiating narrative and player agency. The story is set in October 2006 and players are investigative psychologists given access to a secure police server and tasked with analysing evidence related to two linked murders that have resulted in the arrest of journalist Silvi Moore. The aim is to uncover what happened and why by analysing Silvi’s flat, calendar of events, emails, texts, photos, voicemail, call log, 999 call, a map of the city of Plymouth and a crime scene. It is a combination of story exploration game and digital epistolary fiction that is structured via an authored fabula and dynamic syuzhet and uses the Internal-Exploratory and Internal-Ontological interactive modes to negotiate narrative and player agency. Its use of this structure and these modes shows how playable stories are uniquely positioned to deliver self-directed and empathetic emotional immersion simultaneously. The story is told in a mixture of enacted, embedded, evoked, environmental and epistolary narrative, the combination of which contributes new knowledge on how writers can use mystery, suspense and dramatic irony in playable stories. The interactive script prototype is accessible at underlandgame.com and is a means to represent how the final game is intended to be experienced by players. Thesis Abstract: This thesis considers writing and design methods for playable stories that negotiate narrative and player agency. By approaching the topic through the lens of creative writing practice, it seeks to fill a gap in the literature related to the execution of interactive and narrative devices as a practitioner. Chapter 1 defines the key terms for understanding the field and surveys the academic and theoretical debate to identify the challenges and opportunities for writers and creators. In this it departs from the dominant vision of the future of digital playable stories as the ‘holodeck,’ a simulated reality players can enter and manipulate and that shapes around them as story protagonists. Building on narratological theory it contributes a new term—the dynamic syuzhet—to express an alternate negotiation of narrative and player agency within current technological realities. Three further terms—the authored fabula, fixed syuzhet and improvised fabula—are also contributed as means to compare and contrast the narrative structures and affordances available to writers of live, digital and live-digital hybrid work. Chapter 2 conducts a qualitative analysis of digital, live and live-digital playable stories, released 2010–2016, and combines this with insights gained from primary interviews with their writers and creators to identify the techniques at work and their implications for narrative and player agency. This analysis contributes new knowledge to writing and design approaches in four interactive modes—Internal-Ontological, Internal-Exploratory, External-Ontological and External-Exploratory—that impact on where players are positioned in the work and how the experiential narrative unfolds. Chapter 3 shows how the knowledge developed through academic research informed the creation of a new playable story, Underland; as well as how the creative practice informed the academic research. 
Underland provides a means to demonstrate how making players protagonists of the experience, rather than of the story, enables the coupling of self-directed and empathetic emotional immersion in a way uniquely available to digital playable stories. It further shows how this negotiation of narrative and player agency can use a combination of enacted, embedded, evoked, environmental and epistolary narrative to employ dramatic irony in a new way. These findings demonstrate ways playable stories can be written and designed to deliver the ‘traditional’ pleasure of narrative and the ‘newer’ pleasure of player agency without sacrificing either.
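A small data-structure sketch of the "authored fabula / dynamic syuzhet" distinction introduced above: the author fixes the set of story events and their in-world chronology, while the order in which the player actually encounters them (the syuzhet) is produced at run time by what they choose to examine. The event names and evidence items here are invented for illustration and are not taken from Underland.

```typescript
// Authored fabula: the fixed set of story events with their in-world chronology.
interface StoryEvent { id: string; when: string; summary: string; }

const fabula: StoryEvent[] = [
  { id: "argument",  when: "2006-10-02", summary: "Victim argues with a colleague." },
  { id: "voicemail", when: "2006-10-04", summary: "A threatening voicemail is left." },
  { id: "murder",    when: "2006-10-05", summary: "The first murder takes place." },
];

// Which piece of evidence reveals which event (hypothetical mapping).
const revealedBy: Record<string, string> = {
  "call log": "voicemail",
  "crime scene": "murder",
  "emails": "argument",
};

// Dynamic syuzhet: the presentation order emerges from player exploration.
const syuzhet: StoryEvent[] = [];
function examine(evidence: string): void {
  const event = fabula.find(e => e.id === revealedBy[evidence]);
  if (event && !syuzhet.includes(event)) {
    syuzhet.push(event); // presentation order = order of discovery
  }
}

// One possible playthrough: the player meets events out of chronological order.
["crime scene", "emails", "call log"].forEach(examine);
console.log(syuzhet.map(e => e.id)); // ["murder", "argument", "voicemail"]
```

The authored fabula stays constant across playthroughs; only the syuzhet varies, which is the negotiation of narrative and player agency the thesis argues for.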
117

Správa vývojové dokumentace přes WWW II / Administration of development documentation over WWW II

Gregárek, Ondřej January 2008 (has links)
Document server is a web application operated through a web browser and intended for the management of development documentation. The application is divided into four basic parts: Requirements, Products, Tests and Test Run. The Requirements section is used to enter requirements for products; a product is developed on the basis of these requirements and registered in the Products part. Test specifications are created in the Tests part according to the requirements from the Requirements part, and the individual products are then tested; the Test Run part stores the records of these tests. Further parts of the application include user management, attaching supplements to records, printing and export of data to different formats, saving record history, and filtering and sorting of entries. All data is stored in a MySQL database. The application is written in the scripting language PHP, data is presented through the Smarty template system, the output is in XHTML, and CSS style sheets are used for formatting. This work describes the development of the application. It first deals with the design of the database and the connections and structure of the individual tables; the function of the programme is explained in detail at the same time, which is essential for a correct database design, since the application is built around the database. The chosen file structure and the relations of the scripts to the library functions are shown, and the template system and the interface through which the programme accesses the database are explained. Most attention is paid to the solution of the important functions of the application, i.e. listing of records, their pagination, filtering and sorting, and the operations on them: saving, browsing, copying, confirmation, working with history, maintaining a consistent category tree, and exporting data to various formats. For each function the problem is outlined, the idea of the solution is described and the relevant scripts are explained. Samples of source code are included for a better understanding of the more complicated algorithms.
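The application itself is written in PHP with Smarty and MySQL; purely as a language-neutral illustration of the record-listing logic the abstract emphasises (filter, sort, then paginate one table of entries), here is a small sketch. The field names and page size are assumptions, not taken from the thesis.

```typescript
// Sketch of the listing pipeline described above: filter, sort, then paginate a
// set of records. Field names (id, title, state) and the page size are assumptions.
interface Requirement { id: number; title: string; state: "new" | "approved" | "done"; }

function listRecords(
  records: Requirement[],
  filter: Partial<Requirement>,
  sortBy: keyof Requirement,
  page: number,
  pageSize = 20,
): Requirement[] {
  const filtered = records.filter(r =>
    Object.entries(filter).every(([key, value]) => r[key as keyof Requirement] === value),
  );
  const sorted = [...filtered].sort((a, b) =>
    a[sortBy] < b[sortBy] ? -1 : a[sortBy] > b[sortBy] ? 1 : 0,
  );
  return sorted.slice(page * pageSize, (page + 1) * pageSize); // one page of results
}

// In the PHP/MySQL application this corresponds roughly to a query of the form
// SELECT ... WHERE state = ? ORDER BY title LIMIT 20 OFFSET ?, rendered by a Smarty template.
```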
118

GIS-based Episode Reconstruction Using GPS Data for Activity Analysis and Route Choice Modeling / GIS-based Episode Reconstruction Using GPS Data

Dalumpines, Ron 26 September 2014 (has links)
Most transportation problems arise from individual travel decisions. In response, transportation researchers have been studying individual travel behavior – a growing trend that requires activity data at the individual level. Global positioning systems (GPS) and geographical information systems (GIS) have been used to capture and process individual activity data, from determining activity locations to mapping routes to these locations. Potential applications of GPS data seem limitless, but our tools and methods to make these data usable lag behind. In response to this need, this dissertation presents a GIS-based toolkit to automatically extract activity episodes from GPS data and derive information related to these episodes from additional data (e.g., road network, land use). The major emphasis of this dissertation is the development of a toolkit for extracting information associated with the movements of individuals from GPS data. To be effective, the toolkit has been developed around three design principles: transferability, modularity, and scalability. Two substantive chapters focus on selected components of the toolkit (map-matching and mode detection); another covers the entire toolkit. The final substantive chapter demonstrates the toolkit’s potential by comparing route choice models of work and shop trips using inputs generated by the toolkit. There are several tools and methods that capitalize on GPS data, developed within different problem domains. This dissertation contributes to that repository of tools and methods by presenting a suite of tools that can extract all possible information that can be derived from GPS data. Unlike existing tools cited in the transportation literature, the toolkit has been designed to be complete (covering preprocessing up to extracting route attributes) and can work with GPS data alone or in combination with additional data. Moreover, this dissertation contributes to our understanding of route choice decisions for work and shop trips by looking into the combined effects of route attributes and individual characteristics. / Dissertation / Doctor of Philosophy (PhD)
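A simplified sketch of one step such a toolkit automates: detecting an activity episode (a stop) as a run of consecutive GPS points that stay within a small radius for at least a minimum dwell time. The thresholds and point format are illustrative assumptions, not the toolkit's actual parameters or data model.

```typescript
// Detect stop episodes in a GPS trace: a stop is a maximal run of consecutive
// points that remain within `radiusM` metres of the run's first point for at
// least `minDwellS` seconds. Thresholds are illustrative, not from the toolkit.
interface GpsPoint { lat: number; lon: number; t: number; } // t = seconds since start of trace

function distanceM(a: GpsPoint, b: GpsPoint): number {
  const r = 6371000; // Earth radius in metres (haversine formula)
  const dLat = ((b.lat - a.lat) * Math.PI) / 180;
  const dLon = ((b.lon - a.lon) * Math.PI) / 180;
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos((a.lat * Math.PI) / 180) * Math.cos((b.lat * Math.PI) / 180) * Math.sin(dLon / 2) ** 2;
  return 2 * r * Math.asin(Math.sqrt(h));
}

function detectStops(trace: GpsPoint[], radiusM = 50, minDwellS = 300): Array<{ start: number; end: number }> {
  const stops: Array<{ start: number; end: number }> = [];
  let i = 0;
  while (i < trace.length) {
    let j = i;
    // extend the run while points stay within radiusM of the anchor point trace[i]
    while (j + 1 < trace.length && distanceM(trace[i], trace[j + 1]) <= radiusM) j++;
    if (trace[j].t - trace[i].t >= minDwellS) stops.push({ start: trace[i].t, end: trace[j].t });
    i = j + 1;
  }
  return stops;
}
```

Segments between detected stops would then be candidates for mode detection and map-matching, the two toolkit components the chapters focus on.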
