  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Introducing a library for declarative list user interfaces on mobile devices

Hedbrandh Strömberg, Casper January 2020 (has links)
Developing user interfaces that consist of lists on native mobile platforms is complex. This project aims to reduce that complexity for application developers building dynamic, interactive lists on the iOS platform by providing an abstraction that lets them write code at a higher level. The result is a library that developers can use to build list user interfaces in a declarative way.
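The library itself targets Swift and iOS, but the declarative idea it describes is language-agnostic and can be sketched in a few lines: the developer states *what* the list contains as plain data, and the library decides *how* to render and update it. The class and function names below are invented for illustration, not taken from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Row:
    title: str

@dataclass
class Section:
    header: str
    rows: list = field(default_factory=list)

def render(sections):
    """Stand-in for the library's rendering step: turn the declarative
    description into flat display lines (a real iOS library would drive
    UITableView/UICollectionView updates instead)."""
    lines = []
    for section in sections:
        lines.append(f"== {section.header} ==")
        lines.extend(f" - {row.title}" for row in section.rows)
    return lines

# The application developer only writes this declarative description:
ui = [Section("Inbox", [Row("Meeting at 10"), Row("Review PR")]),
      Section("Done", [Row("Ship v1.0")])]
print("\n".join(render(ui)))
```

The point of the abstraction is that updating the UI becomes replacing `ui` with a new value; diffing and view reuse are the library's problem, not the developer's.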
182

Forecasting Financial Time Series through Causal and Dilated Convolutional Neural Networks

Börjesson, Lukas January 2020 (has links)
In this paper, predictions of future price movements of a major American stock index were made by analysing past movements of the same and other correlated indices. A model that has shown very good results in speech recognition was modified to suit the analysis of financial data and was then compared to a base model restricted by the assumptions of an efficient market. The performance of any model trained on past observations is heavily influenced by how the data is divided into train, validation and test sets. This is further exaggerated by the temporal structure of financial data, which means that the causal relationship between the predictors and the response varies over time. The complexity of the financial system makes accurate predictions harder still, but the model suggested here was nevertheless able to outperform the naive base model by more than 20 percent. The model is, however, too primitive to be used as a trading system, but suitable modifications for turning it into one are discussed at the end of the paper.
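The title names the core building block: causal, dilated convolutions of the kind popularised by WaveNet-style speech models. A minimal pure-Python sketch of that single operation (weights and data invented for illustration) shows the two properties that matter for time series: outputs depend only on past values, and stacking layers with growing dilation widens the history the model sees.

```python
def causal_dilated_conv(x, weights, dilation):
    """1-D causal convolution: the output at time t depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (no future leakage)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            idx = t - i * dilation   # step back into the past only
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially, letting a network see long price histories cheaply.
prices = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
h = causal_dilated_conv(prices, [0.5, 0.5], dilation=1)  # local average
y = causal_dilated_conv(h, [0.5, 0.5], dilation=2)       # wider context
```

A trained network would learn the weights and interleave nonlinearities; this sketch only demonstrates the causal indexing that prevents the model from peeking at future prices.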
183

A Feasibility Study of Wireless Networks for 17 and 60 GHz and the Impact of Deployment Strategies on the System Performance

Unbehaun, Matthias January 2001 (has links)
This thesis addresses the question of how to deploy the infrastructure for wireless networks carrying high data rate services. It starts from the assumption that a major part of the costs for installing wireless infrastructure is caused by antenna site acquisition, planning of network coverage and capacity as well as manpower for setting up and wiring the antenna sites. Specific attention is therefore paid to simplifying the network installation and hence reducing the overall costs. The results show that the proposed user-deployment approach, where Access Points (APs) are set up by the customers themselves, can achieve coverage and data rates comparable to pre-planned networks with properly placed and wired APs. Typical for user-deployment is that APs are set up in an ad-hoc fashion, wherever wireless access is needed and a wired backbone infrastructure is available. A number of typical usage scenarios are developed for identifying characteristic situations and places where high data-rate applications are likely to be used. Service requirements and essential technical parameters are derived and motivated, based on these scenarios. A system design is proposed, featuring an air-interface with multi-carrier modulation and slow link-adaptation. Both coverage and capacity of this system, which achieves link-layer data rates between 40 and 130Mbps, are then studied in the different usage scenarios. The user-deployment approach, a core supposition in this thesis, requires the considered networks to be operated in an unlicensed fashion. Sufficient spectrum for unlicensed wireless services is allocated around 5, 17, 24 and 60GHz. The 5GHz band has been studied thoroughly in conjunction with the development of HiperLAN/2 and IEEE 802.11. This thesis focuses on the 17 and 60GHz bands and assumes that the performance of a system operating at 24GHz can be to some extent approximated from these results. 
An in-depth investigation of propagation properties at 17 and 60GHz shows that achievable cell radii are rather small. Shadowing severely impairs coverage and achievable data rates of a wireless network. A large number of APs is therefore necessary for providing sufficiently high signal levels to transmit high data rates. The shadowing problem is particularly severe for ad-hoc installations. Two different deployment scenarios and their impact on the system performance are investigated: arbitrarily placed, wall-mounted APs and regularly placed, ceiling-mounted APs. The first represents the user-deployment paradigm and is certainly the cheapest method; the latter requires coarse network planning and suitable wiring and will hence be more costly. Results show that both installation methods achieve comparable performance for dense infrastructures, e.g. indoor environments. Since user-deployment is simpler and cheaper, it should be preferred in this case. Sparse networks are typical for large, open buildings or outdoor areas. In that case, regular installation should be favored, since cells typically overlap very little and achieving coverage is difficult. Consequently, some form of network planning is needed. The 60GHz band is best suited for indoor applications with a dense infrastructure, since achievable cell radii are very limited. However, very high data rates and capacities can be offered due to the large amount of bandwidth allocated at 60GHz. If capacity is less important, the 17GHz band should be preferred. The better propagation characteristics allow larger cells and fewer APs are required for reliable coverage, but the attainable network capacity is limited by the rather small amount of unlicensed spectrum. User-deployment is generally suitable for indoor applications. 
A slightly denser infrastructure will be required to compensate for the lack of network planning, but the costs for additional hardware will likely be insignificant compared to the potential savings from avoiding coverage planning and additional wiring. / QC 20110308
184

Network Planning of Single Frequency Broadcasting Networks

Malmgren, Göran January 1996 (has links)
QC 20110308
185

Integrating BIM and GIS for 3D City Modelling : The Case of IFC and CityGML

El-Mekawy, Mohamed January 2010 (has links)
3D geoinformation has become a base for an increasing number of today’s applications. Examples of these applications are: city and urban planning, real estate management, environmental simulation, crisis and disaster management, telecommunication, facility management and others. 3D city models are presently scattered over different public and private sectors in different systems, different conceptual models, different data formats, different data schemas, different levels of detail and different quality. In addition, the potential of 3D models goes beyond visualisation of 3D objects of virtual scenes to real 3D city models. In such an environment, integration of different sources of data for building real 3D city models becomes more difficult.   3D city models are of two types, design and real world models. Design models are usually used for building industry purposes and to fulfil the requirements of maximum level of detail in the architecture, engineering and construction (AEC) industry. Real world models are geospatial information systems that represent spatial objects around us and are largely represented in GIS applications. Research efforts in the AEC industry resulted in Building Information Modelling (BIM), a process that supports information management throughout buildings’ lifecycle and is increasingly widely used in the AEC industry. Results of different integration efforts of BIM and geospatial models show that only 3D geometric information does not fulfil the integration purpose and may lead to geometrical inconsistency. Further complex semantic information is required. Therefore, this thesis focuses on the integration of the two most prominent semantic models for the representation of BIM and geospatial objects, Industry Foundation Classes (IFC) and City Geography Markup Language (CityGML), respectively.   In the integration of IFC and CityGML building models, substantial difficulties may arise in translating information from one to the other. 
Professionals from both domains have made significant attempts to integrate CityGML and IFC models to produce useful common applications. Most of these attempts, however, use a unidirectional method (mostly from IFC to CityGML) for the conversion process. A bidirectional method can lead to development of unified applications in the areas of urban planning, building construction analysis, homeland security, etc. The benefits of these unified applications clearly appear at the operational level (e.g. cost reduction, unified data-view), and at the strategic level (e.g. crisis management and increased analysis capabilities).   For a bidirectional method, a formal mapping between both domains is required. Researchers have suggested that harmonising semantics is currently the best approach for integration of IFC and CityGML. In this thesis, the focus is therefore on semantic integration of IFC and CityGML building models for bidirectional conversion. IFC and CityGML use different terminologies to describe the same domain and there is a great heterogeneity in their semantics. Following a design research method, the thesis proposes a more expressive reference ontology between IFC and CityGML semantic models. Furthermore, an intermediate unified building model (UBM) is proposed between IFC and CityGML that facilitates the transfer of spatial information from IFC to CityGML and vice versa. A unified model in the current study is defined as a superset model that is extended to contain all the features and objects from both IFC and CityGML building models. The conversion is a two-step process in which a model is first converted to the unified model and then to the target model.   The result of the thesis contributes, through the reference ontology, towards a formal mapping between IFC and CityGML ontologies that allows bidirectional conversion between them. 
Future development of the reference ontology may be seen as the design of a meta-standard for 3D city modelling that can support applications in both domains. Furthermore, the thesis also provides an approach towards a complete integration of CityGML and IFC through the development of the UBM. The latter contribution demonstrates how different classes, attributes and relations have been considered from IFC and CityGML in the building of the UBM.   To illustrate the applicability of the proposed approach, a hospital building located in Norrtälje City, north of Stockholm, Sweden, is used as a case study. The purpose of the case study is to show how different building elements at different levels of detail can be constructed. Considering future research possibilities, the integration approach in the thesis is seen as a starting-point for developing a common database that formulates a UBM’s platform. With such a platform, data from IFC and CityGML can be automatically integrated and processed in different analyses. Other formats can also be included in further steps. Finally, the proposed approach is believed to need future research beyond the building models alone and on an implementation process for testing and verification. / QC 20110127
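The two-step conversion through the unified building model can be sketched as a pair of class mappings composed in either direction. The class names and correspondences below are invented placeholders, not the thesis's actual reference ontology; they only illustrate why the UBM must be a superset of both source schemas.

```python
# Hypothetical fragment of the mapping tables (IFC <-> UBM <-> CityGML).
IFC_TO_UBM = {"IfcWall": "UBM_Wall", "IfcDoor": "UBM_Opening"}
CITYGML_TO_UBM = {"WallSurface": "UBM_Wall", "Door": "UBM_Opening"}

# Inverting the tables gives the second hop; bidirectionality is what a
# unidirectional IFC->CityGML converter cannot offer.
UBM_TO_CITYGML = {v: k for k, v in CITYGML_TO_UBM.items()}
UBM_TO_IFC = {v: k for k, v in IFC_TO_UBM.items()}

def convert(entity: str, to_ubm: dict, from_ubm: dict) -> str:
    """Step 1: source class -> UBM class. Step 2: UBM class -> target."""
    return from_ubm[to_ubm[entity]]

# IFC -> CityGML and CityGML -> IFC through the same intermediate model:
assert convert("IfcWall", IFC_TO_UBM, UBM_TO_CITYGML) == "WallSurface"
assert convert("Door", CITYGML_TO_UBM, UBM_TO_IFC) == "IfcDoor"
```

In the real integration the mapping also carries attributes, geometry and relations, which is why the thesis builds a full reference ontology rather than a flat lookup table.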
186

E2PM: Enclosed Portable Password Manager

Naing Oo, Aung January 2022 (has links)
Passwords have been a necessary evil for a while. Today’s computer users have multiple accounts on the internet, each burdening the user’s memory with long, complex passwords. The requirement to generate, memorise and maintain such passwords has become a bottleneck that modern-day password managers try to alleviate to a certain extent. This works well until the stored password vaults are breached on the user’s machine, or the third-party servers where passwords are stored get infiltrated. Hence there is a need for something more obscure and self-contained. This research presents E2PM, an Enclosed Portable Password Manager: a hardware-based password manager that aims to be self-contained, secure and portable. These three attributes are achieved by using a live operating system that fits on a portable flash drive whose contents are encrypted using the AES-256 algorithm. E2PM can be used through a live boot or the VirtualBox application. Passwords are stored in a separate partition on E2PM’s drive, never touch the host computer’s hard disk and are strongly encrypted. E2PM intends to provide a low-cost solution using existing hardware, as compared to contemporary hardware-based password managers, and provides backwards compatibility, meaning the user need not make any drastic changes to their application data.
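The abstract names AES-256 for the encrypted partition but not the key management. A typical companion step in such designs, and an assumption here rather than E2PM's documented behaviour, is stretching the user's master password into a 256-bit key with a slow key-derivation function, which the Python standard library can illustrate directly:

```python
import hashlib
import hmac
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 200_000) -> bytes:
    """Stretch a master password into a 256-bit key suitable for AES-256.
    (Illustrative assumption: the abstract specifies AES-256 but not the
    key-derivation scheme E2PM actually uses.)"""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, iterations, dklen=32)

salt = os.urandom(16)            # stored next to the vault; not secret
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32            # 32 bytes = 256 bits, matching AES-256
# Same password + same salt must reproduce the same key:
assert hmac.compare_digest(key,
    derive_vault_key("correct horse battery staple", salt))
```

The high iteration count is the point: it makes offline guessing against a stolen vault expensive while costing the legitimate user a fraction of a second at unlock.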
187

Zero-shot cross-lingual transfer learning for sentiment analysis on Swedish chat conversations

Göhl, Siri Ann January 2022 (has links)
As the field of machine learning grows, so do the publicly available datasets. However, in the field of natural language processing, datasets for specific languages and tasks can be scarce. This thesis shows the possibility of using zero-shot cross-lingual transfer learning to train a variety of machine-learning models on a strictly English dataset and then applying the models to a Swedish dataset. The task at hand is a binary sentiment analysis where the model learns to classify a text as either a personal attack or a non-personal attack. Four machine learning models are trained for this thesis: a feedforward neural network, a long short-term memory neural network, a gated recurrent neural network, and an XLM-RoBERTa transformer. All models are trained or fine-tuned on an English dataset and tested on a Swedish dataset, with overall good results. This thesis shows that it is possible to use zero-shot cross-lingual transfer learning for sentiment analysis when using aligned word embeddings or a pretrained XLM-RoBERTa transformer.
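The aligned-embedding route can be shown with a toy sketch: because English and Swedish words are mapped into one shared vector space, a classifier fitted only on English examples scores Swedish input unchanged. The three-dimensional "embeddings" and hand-set weights below are invented for the illustration; real aligned embeddings (e.g. fastText-style) have hundreds of dimensions and learned classifier weights.

```python
# Toy shared embedding space: hostile words cluster together regardless
# of language, neutral words cluster elsewhere. All vectors invented.
ALIGNED = {
    "idiot":  (0.90, 0.10, 0.00), "stupid": (0.80, 0.20, 0.10),
    "dum":    (0.85, 0.15, 0.05),             # Swedish: "stupid"
    "hello":  (0.10, 0.90, 0.20), "thanks": (0.00, 0.80, 0.30),
    "hej":    (0.05, 0.85, 0.25),             # Swedish: "hello"
}

def embed(text):
    """Average the embeddings of known words (bag-of-vectors)."""
    vecs = [ALIGNED[w] for w in text.lower().split() if w in ALIGNED]
    n = max(len(vecs), 1)
    return tuple(sum(v[i] for v in vecs) / n for i in range(3))

def is_attack(text, attack_axis=(1.0, -1.0, 0.0)):
    """Linear scorer 'trained' on English only (weights hand-set here);
    at test time it is applied to Swedish input with no retraining."""
    v = embed(text)
    return sum(a * b for a, b in zip(v, attack_axis)) > 0

assert is_attack("you idiot") and not is_attack("hello thanks")
# Zero-shot step: Swedish input, same model, zero Swedish training data.
assert is_attack("dum") and not is_attack("hej")
```

The XLM-RoBERTa route achieves the same effect internally: its multilingual pretraining puts both languages in a shared representation space before fine-tuning.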
188

Improving sales forecasting : A study about the usefulness of geo-positioning and sales correlation data in forecasting of grocery sales

Wahlström, Henrik January 2022 (has links)
The topic of this master’s thesis is to determine whether product positioning and sales correlation can improve sales forecasting of groceries. Previous studies have stated that the sales of groceries are related to their in-store placement. If this holds, it might be possible to use that relation to perform forecasting of sales. To fulfil this purpose, a machine learning framework is applied to perform the forecasting. The machine learning framework consists of several supervised regression models and a neural network. The models are used to forecast sales by first considering product positioning and sales correlation and then not doing so.  One obstacle in forecasting is the need for comprehensive time series. A possible solution is to use augmented data, which was the decision in this project. However, using augmented data requires reasoning about this choice's effect on the outcome of the forecasts. Other than data augmentation, re-sampling and data-cleaning are topics of this thesis.  The thesis concludes that using product positioning and sales correlation as features in machine learning models does not necessarily improve sales forecasting. Nevertheless, it is found that there are circumstances when the inclusion does improve the forecast: when there are many products placed in one section and when the turnover in a section is high. More extensive studies will be needed to fully determine whether product positioning and sales correlation, in general, improve sales forecasting.
189

Secret Pitch : A New Method for Audio Steganography

Spennare, Erik, Jonasson, Simon January 2022 (has links)
Steganography is an old but still highly relevant technique used to hide secret information inside other information. The purpose of steganography is to conceal from outsiders the fact that secret information is being sent, in a seemingly legitimate form. This report presents the subject of steganography and a number of methods for audio steganography. It then describes how a new audio steganography method can be developed. This new method is tested and compared with established methods in terms of capacity, robustness and transparency. A good steganography method can make it easier for people who need secure communication to communicate safely. The steganography method developed in this report, called Secret Pitch, is based on steganography in audio files.  The method was found to have a capacity of 0.18%, or 2550 bit/s, with a theoretical maximum capacity, using the same methodology across the entire frequency band, of 9%, or 127,500 bit/s. Tests showed that the method has high quality in a range of aspects but a certain weakness against compression and low-pass filtering. Through a listening survey it could be concluded that the method introduces no audible artifacts, and it could be statistically established that the steganography method has no audible impact on the cover material. The report can therefore conclude that a new steganography method has been created, with properties that are measurably comparable to other methods.
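Secret Pitch itself is a new pitch-based scheme whose details are in the report; for orientation, one of the classic baselines such comparisons are made against is least-significant-bit (LSB) embedding, which can be sketched in full. This is explicitly the baseline technique, not the Secret Pitch method, and the "samples" below are fake PCM values rather than a real audio file.

```python
def embed_lsb(samples, message: bytes):
    """Hide message bits in the least significant bit of 16-bit PCM
    samples. Classic LSB baseline, NOT the Secret Pitch method."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(samples), "message too long for cover"
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the LSB
    return out

def extract_lsb(samples, n_bytes: int) -> bytes:
    """Read the hidden bits back out of the first n_bytes * 8 samples."""
    bits = [s & 1 for s in samples[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

cover = [1000, 1001, -532, 7, 0, 12, 99, 100] * 10  # fake PCM samples
stego = embed_lsb(cover, b"hi")
assert extract_lsb(stego, 2) == b"hi"
# Each sample changes by at most 1 of 65536 levels, so the change is
# inaudible, but the hidden bits do not survive lossy compression,
# the same robustness axis the report measures for Secret Pitch.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Capacity, robustness and transparency trade off differently here than in a pitch-based scheme, which is exactly why the report compares along those three axes.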
190

Multivariate Time Series Forecasting in MAX IV Electron Accelerator using Predictive Maintenance / Multivariat Tidsserieprediktion med Förutsägande Underhåll i MAX IV Elektronaccelerator

Heinze, Henrik, Persson, Olof January 2022 (has links)
There are different approaches to the maintenance of equipment in environments such as production factories, manufacturing facilities and research laboratories. No equipment in these categories is perfect, and all of it requires upkeep in some form. One of these approaches, called predictive maintenance, has the core functionality of predicting the need for maintenance ahead of time, instead of relying on traditional methods such as scheduled maintenance or even run-to-failure. This prediction is often done using some form of machine learning algorithm that, based on the analysis of past data, develops a learned behaviour in order to see patterns and thereby predict imminent errors in the equipment in question. In this paper, a deep learning neural network using LSTM for predicting potential future errors in an electron accelerator facility is developed, applied and tested. This is shown in the form of a proof of concept, directly tied to the specific accelerator at the MAX IV facility just outside Lund, Sweden. The process includes developing software for data processing, deep learning neural networks and result review. The best model, trained and tested on recorded, previously unseen data from the facility, achieves an error detection rate of 98.17%. This demonstrates the proof of concept: that multivariate time series forecasting using predictive maintenance is possible in the MAX IV electron accelerator with tangible accuracy.
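The abstract does not detail the data pipeline, but the standard preprocessing step for any LSTM forecaster on multivariate sensor logs, assumed here for illustration, is slicing the series into fixed-length input windows with the following reading as the prediction target:

```python
def make_windows(series, window: int):
    """Slice a multivariate time series (a list of per-timestep feature
    tuples) into supervised (X, y) pairs: each X is `window` consecutive
    readings, each y the reading that follows. This is the usual input
    shape for an LSTM forecaster."""
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window])
        y.append(series[t + window])
    return X, y

# Fake 2-feature sensor log (hypothetical signals, e.g. magnet
# temperature and beam current) ending in anomalous readings:
log = [(20.0, 1.0), (20.5, 1.1), (21.0, 0.9), (25.0, 0.2), (30.0, 0.0)]
X, y = make_windows(log, window=2)
# Yields 3 training pairs covering the run-up to the anomaly; a trained
# forecaster flags an error when the observed reading diverges from its
# prediction by more than a threshold.
```

The error-detection rate then comes from comparing predicted and actual readings on held-out data, as the thesis does with its recorded, previously unseen facility data.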
