31

A Comparison of Flare Forecasting Methods. IV. Evaluating Consecutive-day Forecasting Patterns

Park, S.H., Leka, K.D., Kusano, K., Andries, J., Barnes, G., Bingham, S., Bloomfield, D.S., McCloskey, A.E., Delouille, V., Falconer, D., Gallagher, P.T., Georgoulis, M.K., Kubo, Y., Lee, K., Lee, S., Lobzin, V., Mun, J., Murray, S.A., Hamad Nageem, Tarek A.M., Qahwaji, Rami S.R., Sharpe, M., Steenburgh, R.A., Steward, G., Terkildsen, M. 21 March 2021
A crucial challenge to successful flare prediction is forecasting periods that transition between "flare-quiet" and "flare-active." Building on earlier studies in this series in which we describe the methodology, details, and results of flare forecasting comparison efforts, we focus here on patterns of forecast outcomes (success and failure) over multiday periods. A novel analysis is developed to evaluate forecasting success in the context of catching the first event of flare-active periods and, conversely, correctly predicting declining flare activity. We demonstrate these evaluation methods graphically and quantitatively as they provide both quick comparative evaluations and options for detailed analysis. For the testing interval 2016-2017, we determine the relative frequency distribution of two-day dichotomous forecast outcomes for three different event histories (i.e., event/event, no-event/event, and event/no-event) and use it to highlight performance differences between forecasting methods. A trend is identified across all forecasting methods that a high/low forecast probability on day 1 remains high/low on day 2, even though flaring activity is transitioning. For M-class and larger flares, we find that explicitly including persistence or prior flare history in computing forecasts helps to improve overall forecast performance. It is also found that using magnetic/modern data leads to improvement in catching the first-event/first-no-event transitions. Finally, 15% of major (i.e., M-class or above) flare days over the testing interval were effectively missed due to a lack of observations from instruments away from the Earth-Sun line.
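As a rough illustration of the consecutive-day outcome analysis described above, the following Python sketch tallies two-day dichotomous forecast outcomes for each event history. The daily probabilities, event flags, and the 0.5 threshold are invented for illustration; they are not the study's data or method.

```python
from collections import Counter

# Hypothetical daily forecast probabilities and observed flare occurrence.
probs  = [0.10, 0.15, 0.60, 0.70, 0.20, 0.05, 0.55, 0.65]
events = [False, False, True, True, False, False, True, False]
THRESHOLD = 0.5  # assumed probability above which a "yes" forecast is issued

def outcome(prob, event):
    """Dichotomous outcome for one day: hit, miss, false alarm, correct null."""
    forecast = prob >= THRESHOLD
    if forecast and event:
        return "hit"
    if not forecast and event:
        return "miss"
    if forecast and not event:
        return "false alarm"
    return "correct null"

# Tally consecutive-day outcome pairs per two-day event history; e.g. the
# no-event/event history probes whether the first event of a flare-active
# period was caught.
patterns = Counter()
for day in range(1, len(events)):
    history = f"{'event' if events[day-1] else 'no-event'}/{'event' if events[day] else 'no-event'}"
    pair = (outcome(probs[day-1], events[day-1]), outcome(probs[day], events[day]))
    patterns[(history, pair)] += 1

for (history, pair), n in sorted(patterns.items()):
    print(f"{history:18s} {pair[0]:13s} -> {pair[1]:13s} : {n}")
```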
32

A model for the visual representation of the coherence of facts in a textual document set

Engelbrecht, Louis January 2016
A large amount of information is contained in textual records, which originate from a variety of sources such as handwritten records and digital media like audio and video files. The information contained in these records is unstructured, and visualising the content of the records is not a trivial task. In order to visualise information contained in unstructured textual records, the information must be extracted from the records and transformed into a structured format. This research aimed to visualise the coherence of facts contained in textual sources in order to allow the user of the visualisation to make an assumption about the validity of the textual records as a set. For the purpose of the study, the coherence of facts contained in a document set was taken to be indicated by multiple occurrences of the same fact across several documents in the set. The output of this research is a model that abstracts the process required to transform information contained in unstructured textual records into a structured format, together with the visual representation of the multiple occurrences of facts, in order to support the process of making an assumption about the coherence of facts in the set. This assumption enables the user to make a decision, based on the coherence theory of truth, about the validity of the document set. The model provides guidance and practices for performing tasks on similar textual document sets containing secondary data. The development of the model was informed by a phased construction of three specific software solution instantiations: an initial information extraction instantiation, an intermediate visual representation instantiation, and a final information visualisation instantiation. The final solution instantiation was demonstrated to research participants and evaluated. A pragmatic design science research approach was followed in order to solve the research problem; in conducting the research, an adaptation of the Peffers et al. (2006) design research process model was followed. The result of the research is a model for the visual representation of the coherence of facts in a textual document set. Expert review of the model was added through a process of peer review and academic scrutiny by means of conference papers and a journal article. It is envisaged that the results of the research can be applied to a number of research fields such as Indigenous Knowledge, History and Law. / School of Computing / M. Sc. (Computing)
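As an illustration of the multiple-occurrence idea behind the model, here is a minimal Python sketch. The fact representation and the documents are hypothetical; the thesis's actual extraction and visualisation pipeline is far more involved.

```python
from collections import Counter

# Hypothetical extracted facts per document (the thesis extracts these from
# unstructured text; here they are given directly for illustration).
documents = {
    "doc1": {"smith born 1902", "smith lived pretoria"},
    "doc2": {"smith born 1902", "smith married 1925"},
    "doc3": {"smith born 1902", "smith lived pretoria"},
}

# Coherence indicator: how many documents assert each fact. Facts that recur
# across several documents support a coherence-of-truth judgement on the set.
occurrences = Counter(fact for facts in documents.values() for fact in facts)

for fact, n in occurrences.most_common():
    print(f"{n}/{len(documents)} documents state: {fact}")
```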
33

Monitoring and Simulation of ADS Experimental Target Behaviour, Heat Generation, and Neutron Leakage

Svoboda, Josef January 2021
Accelerator-driven subcritical systems (ADS), with their ability to transmute long-lived radionuclides, could solve the problem of spent nuclear fuel from current nuclear reactors, as well as the potential shortage of the fuel used today, U-235, since they are able to extract energy from U-238 or even the abundant thorium isotope Th-232. Within basic ADS research, this dissertation deals with spallation reactions and the heat production of various experimental targets. The experimental measurements were carried out at the Joint Institute for Nuclear Research in Dubna, Russian Federation. Thirteen experiments were performed during the doctoral study over the years 2015-2019. In the research, various targets were irradiated at the Phasotron accelerator with 660 MeV protons: first the spallation target QUINTA, composed of 512 kg of natural uranium, and subsequently experimental targets made of lead and carbon, or a target assembled from lead bricks. A special experiment was also performed, focused on detailed investigation of two proton-irradiated uranium cylinders of the type from which the QUINTA spallation target is assembled. The research focused primarily on monitoring the heat released by decelerating protons, by the spallation reaction, and by fission induced by the neutrons produced in the spallation reaction; pions and photons also contributed to the heat release. Temperature was measured experimentally using precise, specially calibrated thermocouples, and temperature differences were monitored both on the surface and inside the targets. Further research focused on monitoring the neutrons escaping from the target by a comparative method using two detectors: the first contained a small amount of fissile material with a temperature sensor, while the second was made of a non-fissile material (W or Ta) with similar material properties and identical dimensions. Neutron leakage (i.e., the neutron flux outside the experimental target) was detected via the energy released by the fission reaction. This thesis deals with the precise measurement of temperature changes using thermocouples, with National Instruments electronics and LabVIEW software for data acquisition. The scripting language Python 3.7 (with several libraries) was used for data handling, analysis and visualization. Particle transport was simulated with MCNPX 2.7.0, and finally the heat transfer simulation and determination of the surface temperature of the simulated model were performed in ANSYS Fluent (ANSYS Transient Thermal for simpler calculations).
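The following Python sketch illustrates the kind of temperature analysis described: baseline subtraction and a crude heating-rate estimate from thermocouple readings. The signal model and all numbers are synthetic assumptions, not the experimental data.

```python
import numpy as np

# Hypothetical thermocouple log: time (s) and temperature (deg C) for one
# sensor inside a target; the real data were acquired with National
# Instruments hardware and LabVIEW, then analysed in Python.
time = np.arange(0.0, 600.0, 1.0)
temp = 20.0 + 2.5 * (1 - np.exp(-time / 120.0)) + np.random.normal(0, 0.02, time.size)

# Temperature rise relative to a pre-irradiation baseline (first 30 s assumed
# beam-off).
baseline = temp[:30].mean()
rise = temp - baseline

# Crude heating-rate estimate (K/s) from a linear fit over the first 60 s of
# irradiation; released power would follow from the target's heat capacity.
slope = np.polyfit(time[:60], temp[:60], 1)[0]
print(f"baseline {baseline:.2f} C, max rise {rise.max():.2f} K, "
      f"initial heating rate {slope * 1000:.2f} mK/s")
```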
34

Text complexity visualisations : An exploratory study on teachers' interpretations of radar chart visualisations of text complexity

Anderberg, Caroline January 2022
Finding the appropriate level of text for students with varying reading abilities is an important and demanding task for teachers. Radar chart visualisations of text complexity could potentially be an aid in that process, but they need to be evaluated to see if they are intuitive and if they say something about the complexity of a text. This study explores how visualisations of text complexity, in the format of radar charts, are interpreted, what measures they should include and what information they should contain in order to be intelligible for teachers who work with people who have language and/or reading difficulties. A preliminary study and three focus group sessions were conducted with teachers from special education schools at upper secondary and adult level. Through thematic analysis of the sessions, five themes were generated, and it was found that the visualisations are intelligible to some extent, but they need to be adapted to the target group by making sure the measures are relevant, and that the scale, colours, categories and measures are clearly explained.
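A minimal matplotlib sketch of a radar chart of text-complexity measures like those discussed; the measure names and values here are hypothetical, not the study's measure set.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical text-complexity measures on a 0-1 scale (the actual measure
# set and its scaling were themselves topics in the focus groups).
measures = ["word length", "sentence length", "lexical density",
            "rare words", "subordinate clauses"]
text_a = [0.4, 0.7, 0.5, 0.3, 0.6]

# Close the polygon by repeating the first value/angle.
angles = np.linspace(0, 2 * np.pi, len(measures), endpoint=False).tolist()
values = text_a + text_a[:1]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(measures)
ax.set_ylim(0, 1)  # an explicit, explained scale was a key focus-group request
ax.set_title("Text complexity (hypothetical values)")
plt.show()
```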
35

Visualization of E-commerce Transaction Data : Using Business Intelligence Tools

Safari, Arash January 2015
Customer Value (CV) is a data analytics company experiencing problems presenting the results of their analytics in a satisfactory manner. As a result, they considered the use of data visualization and business intelligence software, whose purpose is, amongst other things, to represent data virtually in an interactive and perceptible manner to the viewer. There are, however, a large number of these types of applications on the market, making it hard for companies to find the one that best suits their purposes. CV is one such company, and this report was done on their behalf with the purpose of identifying the software best fitting their specific needs. This was done by conducting case studies on specifically chosen software packages and comparing the results of the studies. The software selection process was based largely on the Magic Quadrant report by Gartner, which contains a general overview of a subset of the business intelligence software available on the market. The selected packages were QlikView, Qlik Sense, GoodData, Panorama Necto, Datawatch, Tableau and SiSense. The case studies focused mainly on aspects of the software that were of interest to CV, namely data importation capabilities, data visualization options, the possibilities of updating the model based on underlying data changes, the options available for sharing the created presentations, and the amount of support offered by the software vendor. Based on the results of the case studies, it was concluded that SiSense was the software that best satisfied the requirements set by CV.
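For illustration only, comparing case-study results across criteria might be expressed as a weighted scoring sketch like the one below. The criteria weights and scores are invented, and the thesis itself drew its conclusion from qualitative case studies rather than numeric scoring.

```python
# Hypothetical weighted scoring of two of the evaluated tools against CV's
# stated criteria; all numbers are illustrative assumptions.
criteria = {"data import": 0.3, "visualisation": 0.25,
            "model updates": 0.2, "sharing": 0.15, "vendor support": 0.1}

scores = {  # 1-5 per criterion, invented for illustration
    "SiSense": {"data import": 5, "visualisation": 4, "model updates": 4,
                "sharing": 4, "vendor support": 4},
    "Tableau": {"data import": 4, "visualisation": 5, "model updates": 3,
                "sharing": 4, "vendor support": 3},
}

for tool, s in scores.items():
    total = sum(weight * s[c] for c, weight in criteria.items())
    print(f"{tool}: weighted score {total:.2f}")
```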
36

Using UX design principles for comprehensive data visualisation

Ali, Umar, Sulaiman, Rabi January 2023
Workplace safety, particularly in manual handling tasks, is a critical concern that has been increasingly addressed using advanced risk assessment tools. However, presenting the complex results of these assessments in an easily digestible format remains a challenge. This thesis focused on designing and developing a user-friendly web application to visualise risk assessment data effectively. Grounded in a robust theoretical framework that combines user experience principles and data visualisation techniques, the study employed an iterative, user-centric design process to develop the web application. Multiple visualisation methods, such as pie charts for visualising risk distribution, and bar charts and line charts for time-based analysis, were evaluated for their effectiveness through usability testing. The application's primary contribution lies in its efficient data visualisation techniques, aimed at simplifying complex datasets into actionable insights. This work lays the groundwork for future development by pinpointing areas for improvement such as enhanced interactivity and accessibility.
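A small matplotlib sketch of the chart types evaluated: a pie chart for risk distribution and a line chart for time-based analysis. The data are hypothetical, and the thesis implemented these visualisations in a web application rather than in Python.

```python
import matplotlib.pyplot as plt

# Hypothetical manual-handling risk assessment results.
risk_counts = {"low": 14, "medium": 6, "high": 3}  # risk distribution
weeks = [1, 2, 3, 4, 5]
high_risk_tasks = [5, 4, 4, 3, 3]                  # time-based trend

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Pie chart for the distribution of risk levels across assessed tasks.
ax1.pie(list(risk_counts.values()), labels=list(risk_counts.keys()),
        autopct="%1.0f%%")
ax1.set_title("Risk distribution")

# Line chart for how the number of high-risk tasks evolves over time.
ax2.plot(weeks, high_risk_tasks, marker="o")
ax2.set_xlabel("week")
ax2.set_ylabel("high-risk tasks")
ax2.set_title("Time-based analysis")

plt.tight_layout()
plt.show()
```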
37

Quality Assessment of GEDI Elevation Data

Wildan Firdaus 13 December 2023
As a new spaceborne laser remote sensing system, the Global Ecosystem Dynamics Investigation, or GEDI, is being widely used for monitoring forest ecosystems. However, its measurements are subject to uncertainties that affect the calculation of ground elevation and vegetation height. This research investigates the quality of the GEDI elevation data and its relationship to topography and land cover.

In this study, the elevation of the GEDI data is compared to the 3DEP DEM, which has higher resolution and accuracy. All the experiments are conducted for two locations with vastly different terrain and land cover conditions, namely Tippecanoe County in Indiana and Mendocino County in California. Through this investigation we expect to gain a comprehensive understanding of GEDI's elevation quality in various terrain and land cover conditions.

The results show that GEDI data in Tippecanoe County has better elevation accuracy than the GEDI data in Mendocino County: almost four times more accurate. Regarding land cover, GEDI has better accuracy in low vegetation areas than in forest areas; the ratio is around three times better in Tippecanoe County and around one and a half times better in Mendocino County. In terms of slope, the GEDI data show a clear positive correlation between RMSE and slope: as slope increases, the RMSE increases concurrently. In other words, slope and GEDI elevation accuracy are inversely related. In the experiment involving both slope and land cover, the results show that slope is the most influential factor for GEDI elevation accuracy.

This study informs GEDI users of the factors they must consider for forest biomass calculation and topographic mapping applications. When high terrain slope and/or high vegetation is present, the GEDI data should be checked against other data sources such as the 3DEP DEM or ground truth measurements to assure its quality. We expect these findings to help users worldwide understand that the quality of GEDI data is variable and dependent on terrain relief and land cover.
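A short Python sketch of the kind of slope-binned RMSE computation this comparison implies. The elevation errors and slopes are simulated under the assumption that error grows with slope, matching the reported trend; they are not GEDI or 3DEP data.

```python
import numpy as np

# Hypothetical paired samples: GEDI footprint elevation minus reference DEM
# elevation (m), with terrain slope (degrees) at each footprint.
rng = np.random.default_rng(0)
slope = rng.uniform(0, 40, 2000)
error = rng.normal(0, 0.5 + 0.08 * slope)  # error scale grows with slope

# RMSE per 5-degree slope bin, mirroring the reported slope-accuracy
# relationship (RMSE increasing with slope).
bins = np.arange(0, 45, 5)
idx = np.digitize(slope, bins)
for b in range(1, len(bins)):
    e = error[idx == b]
    if e.size:
        rmse = np.sqrt(np.mean(e ** 2))
        print(f"slope {bins[b-1]:2d}-{bins[b]:2d} deg: "
              f"RMSE {rmse:.2f} m (n={e.size})")
```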
38

Analysis of Performance Parameters for Service Assurance in Radio Access Networks

Raymat, Daryell, Chaker, Mohammed January 2023
During the thesis project, an evaluation tool was developed for Telenor. This tool identifies the most reliable cell within a site based on the standard deviation of the analysed key performance indicator (KPI) and systematically ranks the performance of each cell. The tool also calculates the mean and the median to give the user an overview of network performance. The analysis underscored the importance of robust network reliability, especially when considering the deployment of home care technologies in remote areas. The tool is designed to analyse extracted data, providing Telenor with a view of network performance and helping to ensure top-tier service quality in even the most remote and challenging terrains.
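A minimal pandas sketch of the ranking logic described: per-cell mean, median, and standard deviation of a KPI, with the lowest standard deviation ranked as most reliable. The KPI values are invented.

```python
import pandas as pd

# Hypothetical per-cell KPI samples (e.g. daily availability in %); the
# actual tool works on KPI data extracted from the operator's network.
df = pd.DataFrame({
    "cell": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "kpi":  [98.1, 97.9, 98.0, 95.0, 99.0, 91.0, 97.0, 96.5, 97.5],
})

# Per-cell mean, median, and standard deviation; the cell with the lowest
# standard deviation is ranked as the most reliable within the site.
stats = df.groupby("cell")["kpi"].agg(["mean", "median", "std"])
stats["rank"] = stats["std"].rank()
print(stats.sort_values("rank"))
```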
39

The Design and Evaluation of Ambient Displays in a Hospital Environment

Koelemeijer, Dorien January 2016
Hospital environments are ranked among the most stressful contemporary work environments for their employees, and this especially concerns nurses (Nejati et al. 2016). One of the core problems is that the technology currently adopted in hospitals does not support the mobile nature of medical work and the complex work environment, in which people and information are distributed (Bardram 2003). The use of inadequate technology and the strenuous access to information result in decreased efficiency in the fulfilment of medical tasks, and put a strain on the attention of the medical personnel. This thesis proposes a solution to the aforementioned problems through the design of ambient displays that inform the medical personnel of the health statuses of patients while requiring minimal allocation of attention. The ambient displays present a hierarchy of information, where the most essential information comprises an overview of patients' vital signs. Data regarding the vital signs are measured by biometric sensors and are embodied by the shape-changing interfaces of which the ambient displays consist. User authentication permits the medical personnel to access a deeper layer within the hierarchy of information, entailing clinical data such as patient EMRs, after gesture-based interaction with the ambient display. The additional clinical information is retrieved on the user's PDA, and can subsequently be viewed in more detail, or modified, at any place within the hospital. In this thesis, prototypes of shape-changing interfaces were designed and evaluated in a hospital environment. The evaluation focused on the interaction design and user experience of the shape-changing interface, the capability of the ambient displays to inform users through peripheral awareness, and the remote communication between patient and healthcare professional through biometric data. The evaluations indicated that the attention required to acquire information from the shape-changing interface was minimal. The interaction with the ambient display, as well as with the PDA when accessing additional clinical data, was deemed intuitive, yet involved a short learning curve. Furthermore, the in situ evaluations indicated that, for optimised communication through the ambient displays, an overview of the health statuses of approximately eight patients should be displayed, placed in the corridors of the hospital ward.
40

Visual Analytics of Big Data from Molecular Dynamics Simulation

Catherine Jenifer Rajam Rajendran 03 February 2023
Protein malfunction can cause human diseases, which makes proteins targets in the process of drug discovery. In-depth knowledge of how a protein functions can contribute widely to understanding the mechanisms of these diseases. Protein functions are determined by protein structures and their dynamic properties. Protein dynamics refers to the constant physical movement of atoms in a protein, which may result in transitions between different conformational states of the protein; these conformational transitions are critically important for proteins to function. Understanding protein dynamics can help us understand and interfere with the conformational states and transitions, and thus with the function of the protein. If we can understand the mechanism of a protein's conformational transition, we can design molecules to regulate this process and regulate protein functions for new drug discovery. Protein dynamics can be simulated by Molecular Dynamics (MD) simulations.

The MD simulation data generated are spatio-temporal and therefore very high-dimensional. To analyze the data, distinguishing the various atomic interactions within a protein by interpreting their 3D coordinate values plays a significant role. Since the data are enormous, the essential step is to find ways to interpret them: generating more efficient algorithms to reduce dimensionality, and developing user-friendly visualization tools to find patterns and trends that are not usually attainable by traditional methods of data processing. Given the typically allosteric, long-range nature of the interactions that lead to large conformational transitions, pinpointing the underlying forces and pathways responsible for a global conformational transition at the atomic level is very challenging. To address these problems, various analytical techniques were applied to the simulation data to better understand the mechanism of protein dynamics at the atomic level, through a new program called Probing Long-distance Interactions by Tapping into Paired-Distances (PLITIP). PLITIP contains a set of new tools based on the analysis of paired distances, which removes the interference of the translation and rotation of the protein itself and can therefore capture the absolute changes within the protein.

Firstly, we developed a tool called Decomposition of Paired Distances (DPD). This tool generates a distance matrix of all paired residues from our simulation data. This paired-distance matrix is not subject to the interference of the translation or rotation of the protein and can capture the absolute changes within the protein. The matrix is then decomposed by DPD using Principal Component Analysis (PCA) to reduce dimensionality and to capture the largest structural variation. To showcase how DPD works, two protein systems were examined, HIV-1 protease and 14-3-3σ, both of which display tremendous structural changes and conformational transitions in their MD simulation trajectories. The largest structural variation and conformational transition were captured by the first principal component in both cases. In addition, structural clustering and ranking of representative frames by their PC1 values revealed the long-distance nature of the conformational transition and identified the key candidate regions that might be responsible for the large conformational transitions.

Secondly, to facilitate further analysis and identification of the long-distance path, a tool called Pearson Coefficient Spiral (PCP) was developed; it generates and visualizes Pearson coefficients that measure the linear correlation between any two sets of residue pairs. PCP allows users to fix one residue pair and examine the correlation of its change with other residue pairs.

Thirdly, a set of visualization tools was developed that generates paired atomic distances for the shortlisted candidate residues and captures significant interactions among them. The first tool is the Residue Interaction Network Graph for Paired Atomic Distances (NG-PAD), which not only generates paired atomic distances for the shortlisted candidate residues, but also displays significant interactions as a network graph for convenient visualization. Second, the Chord Diagram for Interaction Mapping (CD-IP) was developed to map the interactions to protein secondary structural elements and to further narrow down important interactions. Third, Distance Plotting for Direct Comparison (DP-DC) plots any two paired distances of the user's choice, either at the residue or the atomic level, to facilitate identification of similar or opposite patterns of distance change along the simulation time. All the above PLITIP tools enabled us to identify critical residues contributing to the large conformational transitions in both the HIV-1 protease and 14-3-3σ proteins.

Besides the above major project, a side project developing tools to study protein pseudo-symmetry is also reported. It has been proposed that symmetry provides protein stability, opportunities for allosteric regulation, and even functionality. This tool helps us to answer the questions of why there is a deviation from perfect symmetry in proteins and how to quantify it.
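A compact NumPy sketch of the paired-distance plus PCA idea behind DPD, using synthetic coordinates in place of an MD trajectory. For simplicity the pair set here is all atom pairs, whereas the thesis works with residue pairs.

```python
import numpy as np

# Hypothetical trajectory: n_frames snapshots of n_atoms 3D coordinates
# (a real analysis would read MD simulation output instead).
rng = np.random.default_rng(1)
n_frames, n_atoms = 100, 20
coords = rng.normal(size=(n_frames, n_atoms, 3)).cumsum(axis=0) * 0.01 \
         + rng.normal(size=(1, n_atoms, 3))

# All pairwise distances per frame: rigid-body translation/rotation of the
# whole protein cancels out, so only internal changes remain (the DPD idea).
i, j = np.triu_indices(n_atoms, k=1)
dist = np.linalg.norm(coords[:, i, :] - coords[:, j, :], axis=2)  # (frames, pairs)

# PCA via SVD of the mean-centred distance matrix; PC1 captures the largest
# structural variation across the trajectory.
centred = dist - dist.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
pc1_per_frame = centred @ vt[0]  # usable for clustering/ranking frames
explained = s[0] ** 2 / np.sum(s ** 2)
print(f"PC1 explains {explained:.1%} of the variance")
```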
