11

XML as a Format for Representation and Manipulation of Data from Radar Communications

Alfredsson, Anders January 2001 (has links)
XML was designed to be a new standard for marking up data on the web. However, as a result of its extensible and flexible properties, XML is now being used more and more for purposes other than originally intended. Today XML is prompting an approach more focused on data exchange between different applications inside companies, or even between cooperating businesses. Businesses are showing interest in using XML as an integral part of their work. Ericsson Microwave Systems (EMW) is a company that sees XML as a conceivable solution to problems in its radar communication work. A solution based on a relational database system had been analysed earlier. In this project we present an investigation of the work at EMW, together with an identification and documentation of the problems in the radar communication work. The requirements and expectations that EMW has for XML are also presented. Moreover, an analysis has been made to decide to what extent XML could solve EMW's problems. The analysis was conducted by elucidating the problems and possibilities of XML compared to the previous approach, which was based on a relational database management system. The analysis shows that XML has good features for representing hierarchically structured data, as in the EMW case, and that XML is well suited for data integration purposes. Furthermore, the analysis shows that XML, due to its self-describing and weakly typed nature, is inappropriate for the data semantics and integrity problems at EMW. However, it also shows that the new XML Schema standard could be used as a complement to the core XML standard to partially solve the semantics problems.
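To make the typing point concrete, here is a minimal sketch (not from the thesis; the element names and schema are hypothetical) of how an XML Schema adds the data typing that core XML lacks, using Python's lxml:

```python
from lxml import etree

# Plain XML happily accepts "fast" where a number is meant -- weak typing.
doc = etree.fromstring(b"""
<radarLink>
  <frequencyMHz>fast</frequencyMHz>
</radarLink>
""")

# An XML Schema constrains the element to a decimal, catching the error.
xsd = etree.XMLSchema(etree.fromstring(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="radarLink">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="frequencyMHz" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

print(xsd.validate(doc))        # False: "fast" is not a decimal
print(xsd.error_log.last_error) # explains the type violation
```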
12

Adaptation of Legacy Codes to Context-Aware Composition using Aspect-Oriented Programming for Data Representation Conversion

Sotsenko, Alisa January 2013 (has links)
Different computational problem domains, such as sorting and matrix multiplication, usually require different data representations and algorithm-variant implementations in order to be adapted and re-designed for context-aware composition (CAC). Context-aware composition is a technique for designing applications that can adapt their behavior according to changes in the program's context. We considered two application domains: matrix multiplication and graph algorithms (the DFS algorithm in particular). The main problem in implementing the representation mechanisms in these domains is the time spent on data representation conversion, which in the end should not degrade application performance. This thesis presents a flexible aspect-based architecture that includes data structure representation adaptation, in order to reduce the implementation effort required for adapting different application domains. Although the manual approach has a 4-10% smaller overhead than the AOP-based approach across different problems, experiments show that manual adaptation to CAC requires on average three times more programming effort, in terms of lines of code, than the AOP-based approach. Moreover, the AOP-based approach showed an average speed-up of 2.1 over baseline algorithms that use standard data structures.
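As a rough illustration of the dispatch idea (a sketch with assumed names and thresholds, not the thesis's aspect code), context-aware composition keeps the call site fixed while the data representation and algorithm variant are selected from runtime context:

```python
import numpy as np
from scipy.sparse import csr_matrix

def multiply(a, b, density_threshold=0.1):
    """Dispatch to a representation/algorithm variant based on context."""
    density = np.count_nonzero(a) / a.size  # the runtime context
    if density < density_threshold:
        # Convert to a sparse representation first; this conversion cost is
        # exactly the overhead that must stay small relative to the speed-up.
        return (csr_matrix(a) @ csr_matrix(b)).toarray()
    return a @ b  # dense representation is already adequate

a = np.eye(100)               # very sparse: dispatches to the CSR variant
b = np.random.rand(100, 100)
print(multiply(a, b).shape)   # (100, 100)
```

In the AOP-based variant this dispatch and conversion logic would live in an aspect woven around the original call, so the legacy code itself stays unchanged.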
13

Anomaly Detection in Log Files Using Machine Learning

Björnerud, Philip January 2021 (has links)
Logs generated by applications, devices, and servers contain information that can be used to determine the health of a system. Manual inspection of logs is important, for example during upgrades, to determine whether the upgrade and data migration were successful. However, manual testing is not reliable enough, and manual inspection of logs is tedious and time-consuming. In this thesis, we propose to use the machine learning techniques K-means and DBSCAN to find anomalous sequences in log files. This research also investigated two different data representation techniques: feature vector representation and IDF representation. Evaluation metrics such as F1 score, recall, and precision were used to analyze the performance of the applied machine learning algorithms. The study found that the algorithms differ considerably in how they detect anomalies: they performed better at finding the different kinds of anomalous sequences than at finding the total number of them. The results of the study could help users find anomalous sequences without manually inspecting the log file.
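A minimal sketch of one such pipeline (hypothetical log lines, untuned parameters; the thesis's exact features and settings are not reproduced here): an IDF-style representation of log sequences clustered with DBSCAN, whose noise label (-1) marks anomaly candidates.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

log_sequences = [
    "db connect ok migration start migration done",
    "db connect ok migration start migration done",
    "db connect ok migration start migration done",
    "db connect fail retry timeout rollback",      # anomalous sequence
]

# IDF-weighted feature vectors for each log sequence.
X = TfidfVectorizer().fit_transform(log_sequences)

# DBSCAN groups dense regions; points in no cluster get the label -1.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)

for seq, label in zip(log_sequences, labels):
    print("ANOMALY" if label == -1 else "normal ", "|", seq)
```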
14

Swapping Edges of Arbitrary Triangulations to Achieve the Optimal Order of Approximation

Chui, Charles K., Hong, Dong 01 January 1997 (has links)
In the representation of scattered data by smooth pp (:= piecewise polynomial) functions, perhaps the most important problem is to find an optimal triangulation of the given sample sites (called vertices). Of course, the notion of optimality depends on the desirable properties in the approximation or modeling problems. In this paper, we are concerned with the optimal approximation order with respect to a given order r of smoothness and degree k of the polynomial pieces of the smooth pp functions. We will only consider C1 pp approximation with r = 1 and k = 4. The main result in this paper is an efficient method for triangulating any finite set of arbitrarily scattered sample sites, such that these sample sites are the only vertices of the triangulation, and such that for any discrete data given at these sample sites, there is a C1 piecewise quartic polynomial on this triangulation that interpolates the given data with the fifth order of approximation.
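Spelled out as a worked statement, "fifth order of approximation" means roughly the following (a sketch with assumed norms and a generic constant; the paper's precise formulation may differ):

```latex
% If f is sufficiently smooth on the domain \Omega and s is the interpolating
% C^1 piecewise quartic on the constructed triangulation with longest edge h,
\[
  \| f - s \|_{L_\infty(\Omega)} \;\le\; C \, h^{5} \, |f|_{5,\infty,\Omega},
\]
% i.e. halving the mesh size reduces the error by a factor of about 2^5 = 32,
% which is the optimal order attainable with polynomial degree k = 4.
```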
15

Data Visualization vs Data Physicalization for Group Collaboration

Niculescu, Edina, Forslund, Matilda January 2023 (has links)
Data representation tools are commonly used as a means of understanding data. However, new ways of representing data, such as using physical objects, can offer further advantages: it is not only understanding the data that is important, but also giving meaning to the data in order to inspire change. This field, called data physicalization, is still new, meaning that limited research exists about it, which made us interested in exploring it further. We chose to do this by comparing a physicalization tool with a digital representation tool. We limited the scope of our study to group collaboration and investigated the advantages and disadvantages of both tools from this perspective. We found this angle interesting since most major decisions require a group to work together, and the representation tools used for assistance should encourage this. We investigated this by holding focus groups where participants solved problems in a group using one representation tool at a time, followed by individual interviews. We observed the behavior of the participants and compared it to the answers they gave in the interviews to uncover the main advantages and disadvantages of the data visualization and data physicalization tools. The biggest advantage our study uncovered for data visualization is the ability to sort and filter data, which makes the data easier to understand. The biggest disadvantage is that only one person at a time has control over the mouse, and thus the tool, creating a hierarchical group dynamic. The biggest advantage of the physicalization tool is its dynamic nature, which enables users to interact with the data, thus supporting the understanding and exploration of ideas. One of the biggest disadvantages is that data physicalization is a new research field, which means people need time to understand how to use it. New data representation tools can be developed based on these advantages and disadvantages.
16

DEVELOPMENT OF AN ONLINE CATALOG SYSTEM FOR AN AUTONOMOUS GUIDED VEHICLE USING XML AND JAVA

DHARESHWAR, RAHUL G. 11 October 2001 (has links)
No description available.
17

Re-defining data visuals for an efficient and sustainable food waste management

Singh, Suhas January 2017 (has links)
The use of visual data representation is increasing the possibilities to exchange information and communicate in different contexts all over the world. Communicating food wastage visually to influence consumption patterns is one of these possibilities. Food wastage is currently a much-prioritized topic in Sweden as well as globally, due to its negative impacts on society, the environment and the economy, and therefore there is much need for innovative solutions that support the reduction of food waste. This thesis presents qualitative research based on a case study of food waste management at Sala municipality in Sweden, while exploring current visual data representation techniques and their further potential to make food waste management more sustainable. The research framework used in this thesis is based on visual rhetoric and innovation theories. The thesis analyzes food wastage from an international perspective, its connection to the sustainable development goals, and how Matomatic AB uses a visual data representation tool to address food wastage. The thesis further explains how the users associated with Sala municipality interpret the existing tool and the challenges they face, and reviews their expectations in order to build a new visual data representation model. The results of questionnaires filled in by users show that 50% of the respondents understand the current tool to its full capacity and only 50% of the respondents are satisfied with the overall tool. When it comes to the choice of data presentation, 67% of the users showed interest in the use of infographics instead of conventional bar graphs. Therefore, parameters such as making the tool more interesting by using infographics, more user-friendly by limiting the data displayed, and more interactive by giving users options to explore further as they like, were considered while designing the new visual data representation model.
18

Data representation for fluorescence guided stereotactic brain tumor biopsies : Development and evaluation of a visual and auditory user interface

Maintz, Michaela January 2018 (has links)
Background and Objective: In stereotactic brain tumor biopsies, the combination of real-time fluorescence spectroscopy with the detection of microvascular perfusion using laser Doppler flowmetry provides improved localization of the brain tumor while decreasing the risk of intracranial hemorrhage. The surgeon using the measurement probe is required to view signal values on a screen or, usually, when his or her visual focus is directed at the patient, relies on the verbal feedback of a biomedical engineer who is monitoring the measurement signals. In this process, possibly important information can be overlooked and time is lost. The aim of the thesis was the development of a visual and auditory user interface (UI) for use in stereotactic brain tumor biopsies.
Materials and Methods: The system translates the fluorescence intensity of protoporphyrin IX (PpIX) into sound and visual indicators that are easy and fast to recognize, and transmits warning signals in case of signal error or the detection of microvascular perfusion. The increasing and decreasing fluorescence values at tumor margins were reproduced with color gradient models to improve the precision of detecting varying fluorescence intensities when entering tumor tissue. The algorithm produced five signal values when specific fluorescence intensities were measured and compared at different wavelengths. For the development of the UI, a user-centered design was implemented. The user, operating room and safety requirements were gathered by communicating with the biomedical engineers and neurosurgeons who had experience working with fluorescence guided brain tumor biopsies. The requirements were considered when designing the UI's features in LabVIEW, and the auditory feedback was generated using OSC (Open Sound Control). The user interface was intended to deliver measurement data to the user in a way that triggered high response accuracy by being easy to understand, while inducing high user acceptance. The user interaction and response accuracy of the visual and auditory interface were evaluated in statistical tests where operating room situations were mimicked, and the user acceptance of the UI was evaluated.
Results: Signals for no, low (increasing and decreasing) and high fluorescence, as well as two warning indicators for a blocked signal and vessel occurrence, were represented visually and auditorily by the user interface. An intensity/time graph and an intensity/wavelength graph, along with the option of recording measurement files and opening saved files, allowed the inspection of detailed measurement values. The user study exhibited an auditory response accuracy of 95 ± 3% in the intuition test and 91 ± 16% in a memory test. Testing the response accuracy of the individual signal values showed accurate responses in 84% to 100% of the times a signal was played back. The user acceptance rating of the auditory and visual interface showed no negative results.
Conclusion: A UI was developed to visually and auditorily represent measurement values to a neurosurgeon performing a stereotactic brain tumor biopsy and to the biomedical engineers monitoring the measurement signals. The visual display was successful in representing data in a way that was easy to understand. The auditory interface showed high response accuracies for the individual tones representing measurement values. The majority of the test subjects perceived the signals to be intuitive, easy to understand and easy to remember. The auditory and visual UI showed high user acceptance ratings, indicating that the user interface was useful and satisfactory in its application.
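As a rough illustration of the signal mapping described above (a sketch with assumed thresholds, OSC address scheme, and port; the thesis built the UI in LabVIEW and used OSC only for the sound path), the auditory side could look like this in Python with the python-osc library:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # hypothetical sound server

def classify(intensity, blocked, vessel):
    """Map a measurement to one of the UI's discrete signal categories."""
    if blocked:
        return "warning/blocked"       # signal error takes priority
    if vessel:
        return "warning/vessel"        # microvascular perfusion detected
    if intensity <= 0.0:
        return "fluorescence/none"
    return "fluorescence/high" if intensity > 5.0 else "fluorescence/low"

# Each new measurement triggers the matching tone on the sound server.
signal = classify(intensity=3.2, blocked=False, vessel=False)
client.send_message("/" + signal, 3.2)
print(signal)  # fluorescence/low
```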
19

The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations

Cheeseman, Bevan 29 March 2018 (has links) (PDF)
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems posed in the form of the Resolution Bound, using Particle Cells and an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed that the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both regarding the APR's structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data, using both synthetic and LSFM exemplar data. 
It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and to address bottlenecks in processing LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data. Further, we propose that the simple structure of the general APR could provide benefits in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
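Written out as formulas, the two conditions described above read roughly as follows (a sketch in assumed notation, where f̂ denotes the particle reconstruction; the precise derivative form follows the (α, m) generalization mentioned in the abstract):

```latex
% Reconstruction Condition: the sigma-weighted reconstruction error stays
% below the user-set relative threshold E at every location y.
\[
  \frac{\lvert f(y) - \hat{f}(y) \rvert}{\sigma(y)} \;\le\; E
  \quad \text{for all } y .
\]
% Resolution Bound (sketch): R*(y) is limited by the largest function
% derivative found within distance R*(y) of y.
\[
  R^{*}(y)\,\max_{\lvert y' - y \rvert \,\le\, R^{*}(y)} \lvert \nabla f(y') \rvert
  \;\le\; E\,\sigma(y) .
\]
```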
