  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Visual Analytics of Big Data from Molecular Dynamics Simulation

Rajendran, Catherine Jenifer Rajam 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Protein malfunction can cause human diseases, which makes proteins targets in the process of drug discovery. In-depth knowledge of how a protein functions contributes broadly to understanding the mechanisms of these diseases. Protein function is determined by protein structure and its dynamic properties. Protein dynamics refers to the constant physical movement of atoms in a protein, which may result in transitions between different conformational states; these conformational transitions are critically important for proteins to function. Understanding protein dynamics helps us understand, and interfere with, conformational states and transitions, and thus with the function of the protein. If we understand the mechanism of a protein's conformational transition, we can design molecules to regulate this process and thereby regulate protein function for new drug discovery. Protein dynamics can be simulated by Molecular Dynamics (MD) simulations. The data generated by MD simulations are spatio-temporal and therefore very high dimensional. Distinguishing the various atomic interactions within a protein by interpreting their 3D coordinate values plays a significant role in analyzing these data. Because the data are enormous, the essential step is to develop more efficient algorithms to reduce dimensionality, together with user-friendly visualization tools, to find patterns and trends that are not usually attainable by traditional methods of data processing. Because of the typically allosteric, long-range nature of the interactions that lead to large conformational transitions, pinpointing the underlying forces and pathways responsible for a global conformational transition at the atomic level is very challenging.
To address these problems, we developed a new program called Probing Long-distance Interactions by Tapping into Paired-Distances (PLITIP) and applied various analytical techniques to the simulation data to better understand the mechanism of protein dynamics at the atomic level. PLITIP contains a set of new tools based on the analysis of paired distances, which removes the interference of the translation and rotation of the protein itself and can therefore capture absolute changes within the protein. First, we developed a tool called Decomposition of Paired Distances (DPD). This tool generates a distance matrix of all paired residues from our simulation data; because it is built from paired distances, this matrix is not subject to the interference of protein translation or rotation. The matrix is then decomposed by DPD using Principal Component Analysis (PCA) to reduce dimensionality and to capture the largest structural variation. To showcase how DPD works, we analyzed two protein systems, HIV-1 protease and 14-3-3σ, both of which display large structural changes and conformational transitions in their MD simulation trajectories. In both cases, the largest structural variation and conformational transition were captured by the first principal component. In addition, structural clustering and ranking of representative frames by their PC1 values revealed the long-distance nature of the conformational transition and located the key candidate regions that might be responsible for the large conformational transitions. Second, to facilitate identification of the long-distance path, we developed a tool called Pearson Coefficient Spiral (PCP), which generates and visualizes Pearson coefficients measuring the linear correlation between any two residue-pair distance series. PCP allows users to fix one residue pair and examine the correlation of its change with other residue pairs.
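The paired-distance-plus-PCA idea behind DPD can be sketched in a few lines of NumPy. This is a minimal illustration, not PLITIP's actual implementation: the trajectory array layout and the synthetic two-state transition are assumptions made for the example.

```python
import numpy as np

def paired_distance_matrix(traj):
    """For each frame (n_residues x 3 coordinates), compute the condensed
    vector of all pairwise residue-residue distances. Pairwise distances
    are invariant to global translation/rotation of the protein."""
    n_frames, n_res, _ = traj.shape
    iu = np.triu_indices(n_res, k=1)           # indices of unique residue pairs
    feats = np.empty((n_frames, len(iu[0])))
    for f in range(n_frames):
        diff = traj[f, iu[0]] - traj[f, iu[1]]
        feats[f] = np.linalg.norm(diff, axis=1)
    return feats

def pca_first_component(X):
    """Project frames onto the first principal component via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]                           # PC1 value per frame

# Synthetic trajectory: 50 frames, 10 residues drifting between two states
rng = np.random.default_rng(0)
open_state = rng.normal(size=(10, 3))
closed_state = open_state * 0.5
traj = np.array([open_state * (1 - t) + closed_state * t
                 for t in np.linspace(0, 1, 50)])
traj += rng.normal(scale=0.01, size=traj.shape)

pc1 = pca_first_component(paired_distance_matrix(traj))
# PC1 should track the synthetic open-to-closed transition across frames
```

As in DPD, ranking frames by their PC1 value would then order them along the dominant conformational transition.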
Third, we developed a set of visualization tools that generate paired atomic distances for the shortlisted candidate residues and capture significant interactions among them. The first is the Residue Interaction Network Graph for Paired Atomic Distances (NG-PAD), which not only generates paired atomic distances for the shortlisted candidate residues but also displays significant interactions as a network graph for convenient visualization. The second, the Chord Diagram for Interaction Mapping (CD-IP), maps the interactions onto protein secondary-structure elements to further narrow down important interactions. The third, Distance Plotting for Direct Comparison (DP-DC), plots any two paired distances of the user's choice, at either the residue or the atomic level, to facilitate identification of similar or opposite patterns of distance change along the simulation time. Together, the tools of PLITIP enabled us to identify critical residues contributing to the large conformational transitions in both the HIV-1 protease and 14-3-3σ proteins. Besides this major project, a side project developing tools to study protein pseudo-symmetry is also reported. It has been proposed that symmetry provides protein stability, opportunities for allosteric regulation, and even functionality; this tool helps answer why proteins deviate from perfect symmetry and how to quantify that deviation.
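The Pearson-coefficient comparison underlying PCP and DP-DC reduces to correlating one residue pair's distance series against others. A minimal sketch with synthetic distance series (the numbers are illustrative, not from the thesis's trajectories):

```python
import numpy as np

def pair_correlations(fixed_series, other_series):
    """Pearson r between one fixed residue-pair distance series and each of
    the other pairs' series -- the comparison PCP lets the user explore."""
    return np.array([np.corrcoef(fixed_series, s)[0, 1] for s in other_series])

# Synthetic distance series over 100 frames (illustrative only)
t = np.linspace(0.0, 1.0, 100)
fixed = 10.0 - 4.0 * t                      # a pair that closes over the run
others = np.vstack([
    8.0 - 3.0 * t,                          # closes in step: r near +1
    5.0 + 2.0 * t,                          # opens instead:  r near -1
])
r = pair_correlations(fixed, others)
```

Pairs whose distances change in step with (or opposite to) the fixed pair show up as strong positive (or negative) correlations, which is how candidate long-distance paths can be shortlisted.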
272

Data-Driven Operator Behavior Visualization : Developing a Prototype for Wheel Loader / Datadriven visualisering av operatörsbeteende : Utveckling av en prototyp för hjullastare

Tian, Huahua January 2022 (has links)
To realize key business capabilities and secure long-term growth, Volvo Construction Equipment (Volvo CE) set out to define a vision for digital transformation. The latest trends in AI-powered smart electronics open up many opportunities to help Volvo CE's operators use wheel loaders (construction machines) more productively. To ensure operators work in a way that delivers optimum fuel efficiency and productivity on-site, the company aspires to create visual tools that keep track of operator behavior in the operator environment: operator behavior is monitored through key indicators and then visualized to show how it affects results that matter to customers and to Volvo CE. The audience is the operators themselves and internal staff such as UX engineers and product owners. Data-driven concept design (DDCD) is a decision-making approach that relies heavily on collected data and highlights the need to plan and design proactively. It is a popular approach for capturing tacit customer needs and contributes greatly to data visualization design. Emerging concepts such as the digital twin also inspire ideas for data visualization design. However, there is little research on DDCD for data visualization. This work therefore aims to explore appropriate data visualization techniques within the DDCD framework, helping Volvo CE, primarily via data visualization, keep track of operator behaviors and how they affect wheel loader productivity and energy efficiency at different levels and in a wider context. To this end, a series of DDCD cases for improving wheel loader operator behavior was researched and designed, presenting data in a clear and concise visual way for both the internal audience and operator training.
As a result, a prototype containing a series of visualization techniques is proposed for the two target groups and their corresponding application scenarios, including coaching and decision-making support. A series of dashboards with the expected functionality was created based on an understanding of the current machine. The prototype for the internal audience provides site and time selection, a weekly overview window, phase selection, cycle thread tracing, an insight window, data presentation, and a toolbox. The prototype for operator training provides site and time selection, opponent selection, phase selection, cycle thread tracing, an external data window, an individual comparison section, and an insights block.
273

Development of a geovisual analytics environment using parallel coordinates with applications to tropical cyclone trend analysis

Steed, Chad A 13 December 2008 (has links)
A global transformation is being fueled by unprecedented growth in the quality, quantity, and number of different parameters in environmental data through the convergence of several technological advances in data collection and modeling. Although these data hold great potential for helping us understand many complex and, in some cases, life-threatening environmental processes, our ability to generate such data is far outpacing our ability to analyze it. In particular, conventional environmental data analysis tools are inadequate for coping with the size and complexity of these data. As a result, users are forced to reduce the problem in order to adapt to the capabilities of the tools. To overcome these limitations, we must complement the power of computational methods with human knowledge, flexible thinking, imagination, and our capacity for insight by developing visual analysis tools that distill information into the actionable criteria needed for enhanced decision support. In light of said challenges, we have integrated automated statistical analysis capabilities with a highly interactive, multivariate visualization interface to produce a promising approach for visual environmental data analysis. By combining advanced interaction techniques such as dynamic axis scaling, conjunctive parallel coordinates, statistical indicators, and aerial perspective shading, we provide an enhanced variant of the classical parallel coordinates plot. Furthermore, the system facilitates statistical processes such as stepwise linear regression and correlation analysis to assist in the identification and quantification of the most significant predictors for a particular dependent variable. These capabilities are combined into a unique geovisual analytics system that is demonstrated via a pedagogical case study and three North Atlantic tropical cyclone climate studies using a systematic workflow. 
In addition to revealing several significant associations between environmental observations and tropical cyclone activity, this research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.
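The correlation-analysis step the abstract mentions (screening predictors for a dependent variable, as distinct from the parallel-coordinates rendering itself) can be sketched as follows. The variables and coefficients are synthetic stand-ins, not the study's actual environmental observations:

```python
import numpy as np

def rank_predictors(X, y, names):
    """Rank candidate predictors by |Pearson r| with the dependent variable,
    the kind of screening the geovisual system pairs with its enhanced
    parallel coordinates plot."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(-np.abs(r))
    return [(names[j], round(float(r[j]), 3)) for j in order]

# Synthetic environmental predictors and a dependent "activity" variable
rng = np.random.default_rng(1)
n = 200
sst = rng.normal(28, 1, n)              # sea-surface temperature (synthetic)
shear = rng.normal(10, 2, n)            # vertical wind shear (synthetic)
unrelated = rng.normal(0, 1, n)         # variable with no real association
activity = 2.0 * sst - 1.5 * shear + rng.normal(0, 1, n)

ranked = rank_predictors(np.column_stack([sst, shear, unrelated]),
                         activity, ["SST", "wind shear", "unrelated"])
```

The ranked list plays the role of the statistical indicators that guide which axes deserve attention in the multivariate display.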
274

In Situ Summarization and Visual Exploration of Large-scale Simulation Data Sets

Dutta, Soumya 17 September 2018 (has links)
No description available.
275

[pt] INFOGRAFIA NO BRASIL: PANORAMA DE UMA LINGUAGEM MULTIMODAL / [en] INFOGRAPHICS IN BRAZIL: A PANORAMA OF A MULTIMODAL LANGUAGE

DANIEL MOURA NOGUEIRA 21 November 2019 (has links)
This thesis addresses infographic language, one of the products of Information Design, in Brazil. Infographics are widely used as a communication tool by the media to transmit information in a synthetic, clear and attractive way through multimodal diagrammatic visual representations that combine verbal and nonverbal language. The research analyzes the persistent problem of nomenclature in infographics, as well as relevant typologies. It proposes a timeline of the evolution of infographics, traces a brief history of infographics in Brazil, and examines Brazil's participation in the world's largest infographics award (Malofiej) over the last 26 years. It investigates and analyzes data on the teaching of infographics in the country, observing the presence of courses related to the subject in undergraduate journalism and design programs. The theoretical contributions are found in the works of Cairo, De Pablos, Sancho, Tufte, Moraes and Kanno, who have developed research on infographic language and information visualization. Bibliographical and documentary research, as well as interviews with internationally awarded professionals and professors, were carried out to map concepts, theoretical assumptions and issues related to the current infographics scenario. The study aimed to gather data enabling a diagnosis of infographics in Brazil in the academic sphere and in practice, something not revealed in an integrated way by previous research, and to offer a reflection on the role of infographics and its prospects as an informative tool in the contemporary national scene. As a result, a series of infographics was created that presents a panorama of Brazilian infographics, making the data available to the community of infographic designers, journalists, designers and professors in the field of Information Design.
276

Establishing a Standardised National Data System for Evaluating Road Maintenance Emissions in Sweden

Mahmud, Z N M Zarif, Salem, Sajid January 2024 (has links)
A trustworthy and consistent data system is crucial for monitoring and reducing carbon emissions from road maintenance operations. Developing a national data reporting system requires technical support and a systematic plan involving multiple stakeholders to implement the standard. In Sweden, Trafikverket, the Swedish transport agency, recently initiated a project that proposed a solution based on the BEAst standard and outlined the current data collection methods for road maintenance. The BEAst standard is an agreed, industry-driven information standard that promotes machine-readable communication, effectively reduces costs, and increases efficiency by streamlining communication within the industry. It addresses the critical need for a trustworthy data system to monitor and reduce carbon emissions from road maintenance operations. Although the data system has high potential to identify the sources of carbon emissions and support mitigation measures by precisely gathering fuel-use data throughout operation and maintenance activities, integrating data from diverse sources into a consistent system revealed several obstacles, including differences in the CO2 emissions reported by different systems, human factors affecting data quality, and limited access to cloud services. To address these challenges, this study proposes a new data reporting mechanism that requires a detailed specification of reporting parameters covering content, format, resolution, and reporting frequency using BEAst standards.
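The core roll-up such a reporting system enables (fuel-use reports to CO2 per machine) can be sketched as below. The emission factors, machine names and report format are illustrative assumptions for the example; a real system would take the factors and schema from the agreed standard, not hard-code them.

```python
# Illustrative emission factors in kg CO2 per litre (assumed values,
# not Trafikverket's or BEAst's official figures)
EMISSION_FACTOR = {"diesel": 2.68, "hvo": 0.69}

def co2_from_fuel_reports(reports):
    """Aggregate fuel-use reports (machine, fuel type, litres) into kg CO2
    per machine -- the kind of roll-up a standardized data system enables."""
    totals = {}
    for machine, fuel, litres in reports:
        totals[machine] = totals.get(machine, 0.0) + litres * EMISSION_FACTOR[fuel]
    return totals

# Hypothetical reports from two machines on one maintenance contract
reports = [("grader-01", "diesel", 120.0),
           ("grader-01", "diesel", 80.0),
           ("truck-07", "hvo", 200.0)]
totals = co2_from_fuel_reports(reports)
# grader-01: 200 L diesel -> about 536 kg CO2; truck-07: 200 L HVO -> about 138 kg
```

The obstacles the study describes (different systems reporting different CO2 figures) amount to different parties using different factors or resolutions in exactly this computation, which is why a shared specification matters.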
277

Understanding data requirements for Digital Twin visualization : A multi-departmental analysis in a manufacturing environment

Da Cunha Lira Ferreira, Carolina January 2024 (has links)
Enhancing operational efficiency and competitiveness in modern manufacturing environments requires the incorporation of Industry 4.0 technology. The Digital Twin is one of its enablers: a transformative tool for optimizing systems, processes, and real assets by using virtual models synchronized with real-time data. However, it can be difficult to fully exploit the massive volumes of data companies generate. By tailoring Digital Twins to the distinct data requirements of different user profiles within a company, this study seeks to overcome this difficulty and enable efficient data access and well-informed decision-making. The study, carried out at Robert Bosch España Madrid, aimed to understand the specific data requirements of several departments: Engineering, Maintenance, and Production. Questionnaires and interviews were used as part of a qualitative research design to learn about department-specific data needs, and the findings informed the creation of a customized Digital Twin visualizer prototype for the USS6 shopfloor. The findings indicated differences in the data needs of the departments, highlighting the importance of preserving distinct profiles in the Digital Twin visualizer while encouraging cooperation and synergy between departments. Production requires real-time key performance indicator (KPI) monitoring, including cycle time and other production KPIs. Maintenance needs to track equipment maintenance events, Mean Time To Repair (MTTR), and Mean Time Between Failures (MTBF). Engineering requires data related to machine values, status, and performance. Importantly, these findings have implications beyond the sensor manufacturing industry: they offer knowledge applicable to many different industries.
Organizations across diverse industries can enhance their operational performance and decision-making capacity by adapting these best practices to their own settings. Such knowledge-sharing across industries is essential for pushing Industry 4.0 adoption and promoting organizational performance in the digital age. In conclusion, this study adds to the knowledge on customized Digital Twin implementations and emphasizes how data-driven insights and well-informed decision-making can be leveraged to achieve operational excellence across industries.
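The two maintenance KPIs named above follow directly from an equipment log: MTBF is operating time divided by the number of failures, and MTTR is total repair time divided by the number of repairs. A minimal sketch (the log format and figures are hypothetical, not Bosch data):

```python
def maintenance_kpis(operating_hours, repair_durations):
    """MTBF = operating time / number of failures;
    MTTR = total repair time / number of repairs.
    `repair_durations` holds one repair time in hours per failure."""
    n = len(repair_durations)
    mtbf = operating_hours / n
    mttr = sum(repair_durations) / n
    return mtbf, mttr

# Hypothetical month for one machine: 600 h of operation, three failures
mtbf, mttr = maintenance_kpis(600.0, [2.0, 4.0, 3.0])
# mtbf == 200.0 hours between failures, mttr == 3.0 hours per repair
```

In a Digital Twin visualizer, these would be recomputed continuously from the synchronized event stream rather than from a static list.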
278

Användargränssnitt för visualisering av fabriksdata

Mhjazi, Khaled, Dakkeh, Hind January 2024 (has links)
This thesis was conducted to develop a user interface for the visualization of factory data through HMS Networks' Gateway Communicator. The objective of the project was to enhance decision-making in industrial processes through efficient data transfer and visualization. The project involved the design and implementation of a user-friendly, customizable interface that integrates real-time data from industrial machinery. To achieve these goals, the Angular framework was used in conjunction with HTML, CSS, and TypeScript to create a dynamic and responsive user interface. The methodology included user-centered design, ensuring that the final product not only met technical specifications but was also intuitive for end users. Among the features implemented were dynamic components that allow the visualization of critical data such as temperature and pressure through customizable widgets and dialogue components. The results of the project included successful performance testing, unit tests, and usability tests that verified the system's effectiveness in visualizing data in a manner understandable to users. The conclusion of the work is that an effective user interface can increase operational efficiency through improved data visualization and interaction, which in turn can lead to more informed decisions in industrial environments.
279

Data integration between clinical research and patient care: A framework for context-depending data sharing and in silico predictions

Hoffmann, Katja, Pelz, Anne, Karg, Elena, Gottschalk, Andrea, Zerjatke, Thomas, Schuster, Silvio, Böhme, Heiko, Glauche, Ingmar, Roeder, Ingo 16 January 2025 (has links)
The transfer of new insights from basic or clinical research into clinical routine is usually a lengthy and time-consuming process. Conversely, there are still many barriers to directly provide and use routine data in the context of basic and clinical research. In particular, no coherent software solution is available that allows a convenient and immediate bidirectional transfer of data between concrete treatment contexts and research settings. Here, we present a generic framework that integrates health data (e.g., clinical, molecular) and computational analytics (e.g., model predictions, statistical evaluations, visualizations) into a clinical software solution which simultaneously supports both patient-specific healthcare decisions and research efforts, while also adhering to the requirements for data protection and data quality. Specifically, our work is based on a recently established generic data management concept, for which we designed and implemented a web-based software framework that integrates data analysis, visualization as well as computer simulation and model prediction with audit trail functionality and a regulation-compliant pseudonymization service. Within the front-end application, we established two tailored views: a clinical (i.e., treatment context) perspective focusing on patient-specific data visualization, analysis and outcome prediction and a research perspective focusing on the exploration of pseudonymized data. We illustrate the application of our generic framework by two use-cases from the field of haematology/oncology. Our implementation demonstrates the feasibility of an integrated generation and backward propagation of data analysis results and model predictions at an individual patient level into clinical decision-making processes while enabling seamless integration into a clinical information system or an electronic health record.
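The pseudonymization service the framework relies on can be illustrated with a keyed hash: the same patient identifier always maps to the same pseudonym (so research exports stay linkable), but the mapping cannot be inverted without the key. This is a minimal stdlib sketch, not the paper's actual regulation-compliant service, and the key handling shown is for illustration only.

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a patient identifier with an
    HMAC-SHA256 keyed hash; without the key, the pseudonym cannot be
    traced back to the identifier."""
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
    return "PSN-" + digest.hexdigest()[:16]

key = b"example-only-secret"    # in practice held by a trusted third party
p1 = pseudonymize("patient-0042", key)
p2 = pseudonymize("patient-0042", key)   # same patient -> same pseudonym
p3 = pseudonymize("patient-0043", key)   # different patient -> different pseudonym
```

Stability under the same key is what lets the research perspective explore pseudonymized data while the clinical perspective keeps working with identified records.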
280

Insights on poster preparation practices in life sciences

Jambor, Helena Klara 16 January 2025 (has links)
Posters are intended to spark scientific dialogue and are omnipresent at biological conferences. Guides and how-to articles help life scientists prepare informative visualizations in poster format. However, posters shown at conferences are at present often overloaded with data and text and lack visual structure. Here, I surveyed life scientists themselves to understand how they currently prepare posters and which parts they struggle with. Biologists spend on average two entire days preparing one poster, with half of that time devoted to visual design aspects. Most receive no design or software training and little to no feedback when preparing their visualizations. In conclusion, training in visualization principles and tools for poster preparation would likely improve the quality of conference posters. It would also benefit other common visuals such as figures and slides, and improve researchers' science communication overall.
