1.
The evolution of software technologies to support large distributed data acquisition systems. Jones, Robert John, January 1997.
No description available.
2.
NEXT GENERATION TELEMETRY DATA ACQUISITION WITH WINDOWS® NT. Heminger, Larry J., 10 1900.
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada /
There is a wave of change coming. It started in the industrial automation community, and it is slowly and surely working its way into aerospace, satellite, and telemetry applications. It's called the PC, and it's not just for simple quick-look data anymore. Using state-of-the-art commercial hardware and software technologies, PC-based architectures can now be used to perform self-contained, reliable, and high-performance telemetry data acquisition and processing functions – previously the domain of expensive, dedicated front-end systems. This paper will discuss many of the key enabling technologies and will provide examples of their use in a truly next-generation system architecture based on the Microsoft® Windows NT Operating System and related features.
3.
Modeling the User Interfaces: A Component-based Interface Research for Integrating the Net-PAC Model and UML. Tsai, Shuen-Jen, 06 June 2002.
The graphical user interface (GUI) has become the key element of modern information systems and is commonly viewed as one of the decisive factors in the success of an information system project. To help develop effective GUIs, many tools have been introduced by software vendors to meet the needs of designing a variety of interfaces. Such modern design tools give system developers vehicles to create sophisticated GUIs with little code. However, the complexity of many GUIs and the varying expectations among users, designers, and developers make communication among them, and the use of most prevailing design tools, a real challenge. An integrated tool for better design and development of GUIs may help alleviate the problems caused by the miscommunication and knowledge gaps existing among users, designers, and developers.
In this paper, a new design tool is proposed that integrates the GUI design techniques embedded in the Unified Modeling Language (UML) with the Presentation-Abstraction-Control (PAC) model in a Web environment (Net-PAC). The potential problems of using vendor-provided design methodologies will be presented, and the special features of the proposed integrated tool will then be discussed. Real-world cases using the integrated techniques will be presented to illustrate the advantages of the proposed methodology.
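To make the PAC pattern concrete, the following is a minimal Python sketch of a single Presentation-Abstraction-Control agent. All names and behaviour are illustrative assumptions; the thesis's Net-PAC/UML integration is not reproduced here.

```python
# Minimal sketch of one Presentation-Abstraction-Control (PAC) agent.
# Names are illustrative; the thesis's Net-PAC model is richer than this.

class Abstraction:
    """Holds the data model, with no knowledge of how it is displayed."""
    def __init__(self):
        self._value = 0

    def get(self):
        return self._value

    def set(self, value):
        self._value = value


class Presentation:
    """Renders the abstraction; here it simply prints to the console."""
    def render(self, value):
        print(f"Current value: {value}")


class Control:
    """Mediates between presentation and abstraction; in a full PAC
    hierarchy it would also talk to the controls of other agents."""
    def __init__(self):
        self.abstraction = Abstraction()
        self.presentation = Presentation()

    def handle_input(self, new_value):
        self.abstraction.set(new_value)                    # update the model
        self.presentation.render(self.abstraction.get())   # refresh the view


agent = Control()
agent.handle_input(42)  # prints "Current value: 42"
```

The point of the triad is that neither the presentation nor the abstraction knows about the other; only the control couples them, which is what makes agents composable into hierarchies.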
4.
Artificial Intelligence for Graphical User Interface Design: Analysing stakeholder perspectives on AI integration in GUI development and essential characteristics for successful implementation. Henriksson, Linda; Wingårdh, Anna, January 2023.
In today's world, Artificial Intelligence (AI) has seamlessly integrated into our daily lives without us even realising it. We witness AI-driven innovations all around us, subtly enhancing our routines and interactions. Ranging from Siri and Alexa to Google Assistant, voice assistants have become prime examples of AI technology, assisting us with simple tasks and responding to our inquiries. As these once-futuristic ideas have become an indispensable part of our everyday reality, they also become relevant for the field of GUI design. This thesis explores the views of stakeholders, such as designers, alumni, students, and teachers, on the implementation of artificial intelligence (AI) in graphical user interface (GUI) development. It aims to provide an understanding of stakeholders' thoughts and needs, focusing on two research questions. RQ1: What are the viewpoints of design stakeholders regarding the use of Artificial Intelligence tools in GUI development? RQ2: What characteristics should be considered when including AI in GUI development? To collect data, the thesis uses A/B testing and question sessions. In the A/B testing, participants watch two videos, one showing how to digitise a sketch using an AI tool (Uizard) and the other showing how to do the same using a traditional GUI design tool (Figma). Afterwards, the participants answer questions about their experience with the two different ways of digitising a sketch. The study highlighted a generally positive outlook among the participating stakeholders. Students and alumni expressed more enthusiasm, whereas experienced professionals and teachers were cautious yet open to AI integration. Concerns were voiced regarding potential drawbacks, including limited control and issues of over-reliance. The findings underscored AI's potential to streamline tasks but also emphasised the need for manual intervention and raised questions about maintaining control and creative freedom. We hope this work serves as a valuable starting point for other researchers interested in exploring this topic.
5.
A DISTRIBUTED NETWORK ARCHITECTURE FOR PC-BASED TELEMETRY SYSTEMS. Windingland, Kim L., 10 1900.
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California /
The ever-increasing power of PC hardware, combined with the new operating systems available, makes the PC an excellent platform for a telemetry system. For applications that require multiple users or more processing power than a single PC can offer, a network of PCs can be used to distribute data acquisition and processing tasks. The focus of this paper is a distributed network approach to solving telemetry test applications. This approach maximizes the flexibility and expandability of the system while keeping the initial capital equipment expenditure low.
6.
THE FUTURE OF DATA ACQUISITION. Wexler, Marty, 10 1900.
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California /
The necessity to acquire and analyze data dates back to the beginning of science itself. Long ago, a scientist may have run experiments and noted the results on a piece of paper. These notes became the data. The method was crude, but effective. As experiments got more complex, the need for better methodologies arose. Scientists began using computers to gather, analyze, and store the data. This method worked well for most types of data acquisition. As the amount of data being collected increased, larger computers, faster processors, and faster storage devices were used in order to keep up with the demand. This method was more refined, but still did not meet the needs of the scientific community.
Requirements began to change in the data acquisition arena. More people wanted access to the data in real time. Companies producing large data acquisition systems began to move toward a network-based solution. This architecture featured a specialized computer called the server, which contained all of the data acquisition hardware. The server handled requests from multiple clients and handled the data flow to the network, data displays, and the archive medium. While this solution worked well to satisfy most requirements, it fell short in meeting others. The ability to have multiple computers working together across a local or wide area network (LAN or WAN) was not addressed. In addition, this architecture inherently had a single point of failure: if the server machine went down, all data from all sources was lost.
Today, we see that the requirements for data acquisition systems include features only dreamed of five years ago. These new systems are linked around the world by wide area networks. They may include code to command satellites or handle 250 Mbps download rates. They must produce data for dozens of users at once, be customizable by the end user, and they must run on personal computers (PCs)! Systems like these cannot work using the traditional client/server model of the past.
The data acquisition industry demands systems with far more features than were traditionally available. These systems must provide more reliability and interoperability, and be available at a fraction of the cost. To this end, we must use commercial-off-the-shelf (COTS) computers that operate faster than the mainframe computers of only a decade ago. These computers must run software that is smart, reliable, scalable, and easy to use. All of these requirements can be met by a network of PCs running the Windows NT operating system.
7.
Construction and realisation of a measurement system in a radiation field of 10 standard suns. Makineni, Anil Kumar, January 2012.
A measurement system is presented that is used to obtain the I-V characteristics of a solar cell and to track its temperature during irradiation, before the cell is mounted into a complete array/module. This project presents both the design and implementation of an electronic load for testing the solar cell under field conditions of 10,000 W/m^2, which is able to provide current-versus-voltage and power-versus-voltage characteristics of a solar cell using a software-based model developed in LabVIEW. An efficient water-cooling method, which includes a heat-pipe array system, is also suggested. This thesis presents the maximum power tracking of a solar cell and the corresponding voltage and current values. In addition, the design of the clamp system provides an easy means of replacing the solar cell during testing.
Keywords: Solar cell, Metal Oxide Semiconductor Field Effect Transistor (MOSFET), I-V characteristics, cooling system, solar cell clamp system, LabVIEW, Graphical User Interface (GUI).
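The maximum-power tracking the abstract describes reduces, for a measured I-V sweep, to computing P = V·I at each sample and locating the largest product. A minimal Python sketch follows; the sample data are invented for illustration, and the thesis's LabVIEW implementation is not reproduced.

```python
# Find the maximum power point (MPP) of a solar cell from a measured
# I-V sweep: compute P = V * I at each sample, then take the largest.
# The data below are invented for illustration only.

voltages = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.55, 0.6]        # volts
currents = [3.00, 2.98, 2.95, 2.90, 2.78, 2.40, 1.80, 0.0]  # amperes

powers = [v * i for v, i in zip(voltages, currents)]
mpp = max(range(len(powers)), key=powers.__getitem__)

print(f"V_mpp = {voltages[mpp]:.2f} V, "
      f"I_mpp = {currents[mpp]:.2f} A, "
      f"P_mpp = {powers[mpp]:.2f} W")
# -> V_mpp = 0.50 V, I_mpp = 2.40 A, P_mpp = 1.20 W
```

In a real electronic-load setup the sweep itself is produced by stepping the load (here, the MOSFET gate voltage) while sampling V and I, and the same argmax is applied to the live samples.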
8.
Development of a programming environment for the study of asynchronous neural networks. Ανδριακοπούλου, Ειρήνη (Andriakopoulou, Eirini), 14 February 2012.
Apart from Artificial Neural Networks, a related problem is the modelling of the structural and functional characteristics of different parts of the Central Nervous System and of various brain functions. The aim of this thesis is to create a model of the physiological neuron and of the assembly of neural networks involved in a particular brain function. In developing the model, the specific neuroanatomical and neurophysiological characteristics and properties related to the brain states under study were taken into account. The interaction and the developing dynamics were also investigated, both at the cellular level and at the system level, as was the dynamic interaction of neural networks. A macroscopic approach using mathematical models was carried out, and a GUI environment was developed so that the user can manage the program.
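As one example of the kind of mathematical neuron model such an environment might host, the following Python sketch implements a leaky integrate-and-fire neuron. Whether the thesis uses this particular model is not stated; it is an assumed illustration.

```python
# Leaky integrate-and-fire (LIF) neuron -- a common mathematical model
# of a spiking neuron. Assumed for illustration; the thesis's actual
# model is not specified in the abstract.
# dV/dt = (-(V - V_rest) + R*I) / tau, with a spike and reset at threshold.

V_REST, V_THRESHOLD, V_RESET = -65.0, -50.0, -65.0  # membrane potentials (mV)
TAU, R = 10.0, 1.0   # membrane time constant (ms), resistance (MOhm)
DT = 0.1             # integration step (ms)

v = V_REST
spike_times = []
for step in range(1000):                  # simulate 100 ms
    i_input = 20.0                        # constant input current (nA)
    v += DT * (-(v - V_REST) + R * i_input) / TAU
    if v >= V_THRESHOLD:
        spike_times.append(step * DT)     # record the spike time
        v = V_RESET                       # fire and reset

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
```

Coupling many such units with synaptic delays is what turns the single-cell model into the asynchronous networks the title refers to.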
9.
Reducing code duplication in enterprise applications: a framework based on rendering patterns. OLIVEIRA, Delano Hélio, 09 May 2018.
The development of modern enterprise web applications is based on the use of patterns and tools that enable rapid prototyping while ensuring separation between the business model and the graphical user interface (GUI). Scaffold frameworks, for example, increase developer productivity by generating code from the elements of the domain model. However, the generated GUI source code contains a great deal of replication, owing to the coupling that still exists between GUI components and the properties inherent in the application's domain model, which makes the software difficult to maintain. The rendering patterns proposed by Welick et al. offer a conceptual solution to this problem: by mapping domain-model metadata to graphical components, they organize the GUI code and reduce code duplication. In this work, we aim to create a framework for developing enterprise applications with a modern web architecture, focused on the GUI and based on rendering patterns. The framework allows the developer to build GUI components without coupling them to elements of the domain model; the GUI is linked to the domain model through rendering rules, which can be changed easily. The proposed framework was validated through a case study, which demonstrated a significant reduction in code duplication compared to Scaffold frameworks.
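The rendering-pattern idea (deriving widgets from domain-model metadata through swappable rules, rather than hard-coding them per screen) can be sketched in Python as follows. The field names, widget vocabulary, and rule table are hypothetical; the thesis's actual framework is not reproduced.

```python
# Sketch of a rendering rule: GUI widgets are derived from domain-model
# metadata instead of being written by hand for every screen. All names
# here are invented for illustration.

# Domain-model metadata: field name -> declared type.
customer_model = {"name": "string", "birthdate": "date", "active": "boolean"}

# Rendering rules map a metadata type to a widget. Changing one rule
# re-renders every form that uses that type, with no per-screen edits.
rendering_rules = {
    "string": "TextInput",
    "date": "DatePicker",
    "boolean": "Checkbox",
}

def render_form(model, rules):
    """Build a widget list for a model without coupling GUI code to it."""
    return [(field, rules[field_type]) for field, field_type in model.items()]

print(render_form(customer_model, rendering_rules))
# [('name', 'TextInput'), ('birthdate', 'DatePicker'), ('active', 'Checkbox')]
```

The duplication reduction comes from the rule table appearing once, instead of one hand-written (and largely identical) form per domain entity.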
10.
Collaborative Human and Computer Controls of Smart Machines. Hussein Bilal (17565258), 07 December 2023.
Human-Machine Interaction (HMI) refers to a mechanism that supports direct interaction between humans and machines, with the objective of synthesizing machine intelligence and autonomy. The demand for advances in this field of intelligent controls is continuously growing. The Brain-Computer Interface (BCI) is one type of HMI that uses the human brain to enable direct communication between a human subject and a machine. This technology is widely explored in different fields to control external devices using brain signals.
This thesis is driven by two key observations. The first is the limited number of degrees of freedom (DoF) that existing BCI controls can command in an external device, which makes it necessary to assess controllability when choosing a control instrument. The second is the difference between the decision spaces of human and machine when both try to control an external device. To fill the gaps in these two aspects, an additional functional module must be designed that translates the commands issued by the human into high-frequency control commands that machines can understand. These two aspects have not been investigated thoroughly in the literature.
This study focuses on training, detecting, and using human intents to control intelligent machines. Brain signals recorded as electroencephalography (EEG) are used to extract and classify human intents. A selected instrument, the Emotiv Epoc X, is used for pattern training and recognition, chosen over other instruments for its controllability and features. A functional module is then developed to bridge the frequency gap between human intents and the motion commands of the machine. A selected robot, the TinkerKit Braccio, is then used to demonstrate the feasibility of the developed module by fully controlling the robotic arm using human intents alone.
Multiple experiments were performed on the prototyped system to establish the feasibility of the proposed model. The accuracy of sending each command, and hence the accuracy of the system in extracting each intent, exceeded 75%. The feasibility of the proposed model was also tested by controlling the robot to follow pre-defined paths through a purpose-built Graphical User Interface (GUI). The accuracy of each experiment exceeded 90%, which validated the feasibility of the proposed control model.
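The frequency-bridging module the abstract describes can be sketched as a simple repeater: a sparse, low-frequency stream of classified intents is held and re-emitted at the command rate the machine expects. Everything below (rates, intent names, stub functions) is an invented illustration; the thesis's EEG classifier and robot interface are not shown.

```python
import time

# Sketch of frequency bridging: intents arrive at perhaps 1-2 Hz from
# the classifier, while the machine expects commands at a much higher
# rate. The bridge holds the last intent and re-emits it continuously.
# All rates and names are assumptions for illustration.

COMMAND_RATE_HZ = 50        # rate the machine expects
last_intent = "hold"        # most recent classified human intent

def classify_intent_stub(t):
    """Stand-in for the EEG classifier; emits a new intent occasionally."""
    return "rotate_base_left" if t > 1.0 else None

def send_motion_command(intent):
    """Stand-in for the robot interface."""
    print(f"command -> {intent}")

start = time.monotonic()
while (elapsed := time.monotonic() - start) < 2.0:   # run for 2 seconds
    new_intent = classify_intent_stub(elapsed)       # low-frequency input
    if new_intent is not None:
        last_intent = new_intent
    send_motion_command(last_intent)                 # high-frequency output
    time.sleep(1.0 / COMMAND_RATE_HZ)
```

The repeater is the simplest possible bridge; a fuller module would also smooth transitions between intents and time out stale ones for safety.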