1

Retail Buyers Saleability Judgements: A Comparison of Merchandise Categories

Stone, Linda C. (Linda Carol) 08 1900
The purpose of this study was to investigate the saleability judgements of retail store buyers of women's and men's wear. Questionnaires were sent to a sample of 81 women's wear and men's wear buyers representing two specialty stores and one mass merchandiser. Principal components factor analysis with varimax rotation was used to reduce the product, vendor, and information-source variables to eight factors. Three significant differences were found between the women's wear and men's wear buyers, verifying that not all retail buyers are alike. The results will benefit educators in preparing students to become more effective buyers; retail management can incorporate the same information into buyer training programs; and apparel manufacturers can use the study in planning product strategies for retailers.
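As an illustration of the data-reduction step described above, the following is a minimal, hypothetical Python sketch of a varimax-rotated factor analysis (scikit-learn); the 30 rating variables and the synthetic data are invented for the example and do not reproduce the study's instrument or results.

# Hypothetical sketch: reducing buyer ratings to eight varimax-rotated factors
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# 81 buyers rating 30 assumed saleability criteria on a 1-5 scale
ratings = rng.integers(1, 6, size=(81, 30)).astype(float)

fa = FactorAnalysis(n_components=8, rotation="varimax")
scores = fa.fit_transform(ratings)   # factor scores per buyer, shape (81, 8)
loadings = fa.components_.T          # loading of each variable on each factor, shape (30, 8)

# Variables loading most strongly on the first rotated factor
print(np.argsort(-np.abs(loadings[:, 0]))[:5])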
2

Analýza prodejnosti vozidel / Analysis of Vehicle Saleability

Hasmanová, Sabina Bohdana January 2014
This thesis focuses on the major influences on vehicle saleability in the Czech Republic in terms of the most important criteria, such as price, fuel consumption, and colour. Part of the work is a questionnaire survey indicating the desirability of cars over the past five years, from which a sales trend is derived together with an indication of future development.
3

Jobzentrisches Monitoring in Verteilten Heterogenen Umgebungen mit Hilfe Innovativer Skalierbarer Methoden / Job-Centric Monitoring in Distributed Heterogeneous Environments Using Innovative Scalable Methods

Hilbrich, Marcus 24 June 2015
An increasing number of program executions (jobs) per user is an ongoing trend in scientific computing. Growing numbers of available compute cores and lower access barriers, provided by portal systems, workflow systems, or services, drive this trend. At the same time, the abstraction layers that enable grid and cloud solutions make it harder to observe job behaviour, so observation and monitoring capabilities for large numbers of jobs are lacking. Job-centric monitoring offers a solution by presenting job executions in a transparent manner. This dissertation presents methods for a scalable infrastructure that handles the monitoring data of jobs in grid, cloud, and HPC (High Performance Computing) environments. A layered organisation of the servers, combined with decentralised processing and storage of the data, enables a division of labour that respects network bandwidth and storage capacity. In addition, three automatic analysis techniques were developed for evaluating huge quantities of data; one of them is a cross-correlation-based algorithm that uses a tree-based optimisation strategy to reduce both runtime and memory usage. These three techniques can reduce the number of jobs requiring manual analysis from many thousands to the few interesting jobs that actually exhibit outlier behaviour during execution. The methods for mass analysis and for the scalable management of job-centric monitoring data were designed, implemented as prototypes, and examined through measurements as well as theoretical analysis.
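To illustrate the kind of cross-correlation comparison mentioned above, here is a minimal Python sketch that flags jobs whose monitoring traces correlate poorly with a reference trace; the traces, the 0.8 threshold, and the similarity measure are assumptions for the example and do not reproduce the tree-optimised algorithm developed in the dissertation.

# Illustrative sketch: flag jobs whose trace deviates from a reference job
import numpy as np

def max_normalised_xcorr(a, b):
    # Peak of the normalised cross-correlation of two equally long traces
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.correlate(a, b, mode="full").max() / len(a))

rng = np.random.default_rng(1)
reference = np.sin(np.linspace(0, 20, 200))                       # assumed "normal" trace
jobs = {f"job{i:04d}": reference + rng.normal(0, 0.1, 200) for i in range(999)}
jobs["job0999"] = rng.normal(0, 1, 200)                           # injected misbehaving job

outliers = [name for name, trace in jobs.items()
            if max_normalised_xcorr(reference, trace) < 0.8]
print(outliers)   # ideally only the injected outlier remains for manual analysis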
4

Jobzentrisches Monitoring in Verteilten Heterogenen Umgebungen mit Hilfe Innovativer Skalierbarer Methoden / Job-Centric Monitoring in Distributed Heterogeneous Environments Using Innovative Scalable Methods

Hilbrich, Marcus 24 March 2015
An increasing number of program executions (jobs) per user is an ongoing trend in scientific computing. Growing numbers of available compute cores and lower access barriers, provided by portal systems, workflow systems, or services, drive this trend. At the same time, the abstraction layers that enable grid and cloud solutions make it harder to observe job behaviour, so observation and monitoring capabilities for large numbers of jobs are lacking. Job-centric monitoring offers a solution by presenting job executions in a transparent manner. This dissertation presents methods for a scalable infrastructure that handles the monitoring data of jobs in grid, cloud, and HPC (High Performance Computing) environments. A layered organisation of the servers, combined with decentralised processing and storage of the data, enables a division of labour that respects network bandwidth and storage capacity. In addition, three automatic analysis techniques were developed for evaluating huge quantities of data; one of them is a cross-correlation-based algorithm that uses a tree-based optimisation strategy to reduce both runtime and memory usage. These three techniques can reduce the number of jobs requiring manual analysis from many thousands to the few interesting jobs that actually exhibit outlier behaviour during execution. The methods for mass analysis and for the scalable management of job-centric monitoring data were designed, implemented as prototypes, and examined through measurements as well as theoretical analysis.
5

Vliv koeficientu redukce na zdroj ceny na výsledný index odlišnosti při komparativní metodě oceňování nemovitostí / The impact of the price-source reduction coefficient on the resulting index of dissimilarity in the comparative method of real estate valuation

Cupal, Martin Unknown Date
True market prices of real estate, unlike asking prices, are often hard to obtain. Nevertheless, this information is needed by many direct and indirect participants in the real estate market, especially for valuation purposes. The asking prices of particular properties are therefore often used instead, but they are generally not equivalent to market prices, so a way of converting asking prices into market prices is needed. This dissertation presents one approach to this issue. The ratio of market price to asking price is estimated with a multi-dimensional linear regression model and with non-linear estimates of a simple regression. The multi-dimensional linear regression model estimates the ratio from other variables, such as the duration of the offer, the price level of the locality, and others. The non-linear estimates of the regression function were used to model the trend of asking and market prices as a function of the population of the various localities.
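As an illustration of the regression just described, the following Python sketch fits an ordinary least-squares model of the market-price/asking-price ratio on a few explanatory variables; the variable names and the synthetic data are hypothetical and not the author's dataset or model specification.

# Hypothetical sketch: regressing the market/asking price ratio on offer characteristics
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500
supply_duration_days = rng.integers(5, 365, n)     # how long the offer was listed
locality_price_level = rng.normal(1.0, 0.2, n)     # assumed local price index
population_thousands = rng.integers(1, 1200, n)    # population of the locality

# Synthetic relationship used only to generate example data
ratio = (0.95 - 0.0003 * supply_duration_days
         + 0.02 * locality_price_level
         + rng.normal(0.0, 0.02, n))

X = np.column_stack([supply_duration_days, locality_price_level, population_thousands])
model = LinearRegression().fit(X, ratio)
print(model.coef_, model.intercept_)   # estimated effect of each variable on the ratio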
6

Support consumers' rights in DRM: a secure and fair solution to digital license reselling over the Internet

Gaber, Tarek January 2012
Consumers of digital contents are empowered with numerous technologies allowing them to produce perfect copies of these contents and distribute them around the world at little or no cost. To prevent illegal copying and distribution, a technology called Digital Rights Management (DRM) has been developed. With this technology, consumers may access digital contents only if they have purchased the corresponding licenses from license issuers. The problem, however, is that those consumers are not allowed to resell their own licenses, a restriction that goes against the first-sale doctrine. Enabling a consumer to buy a digital license directly from another consumer, and allowing the two consumers to fairly exchange the license for a payment, remains an open issue in DRM research. This thesis investigates existing security solutions for digital license reselling and analyses their strengths and weaknesses. It then proposes a novel Reselling Deal Signing (RDS) protocol to achieve fairness in license reselling. The idea of the protocol is to integrate the features of the concurrent signature scheme with the functionalities of a License Issuer (LI). The security properties of this protocol are informally analysed and then formally verified using ATL logic and the model checker MOCHA. To assess its performance, a prototype of the RDS protocol has been developed and a comparison with related protocols has been conducted. The thesis also introduces two novel digital tokens: a Reselling Permission (RP) token and a Multiple Reselling Permission (MRP) token. The RP and MRP tokens show whether a given license is resalable once or multiple times, respectively. Moreover, the thesis proposes two novel methods supporting fair and secure digital license reselling. The first is the Reselling Deal (RD) method, which allows a license to be resold once. This method makes use of the existing distribution infrastructure, RP, a License Revocation List (LRL), and three protocols: the RDS protocol, an RD Activation (RDA) protocol, and an RD Completion (RDC) protocol. The second is a Multiple License Reselling (MLR) method, enabling one license to be resold N times by N consumers. The thesis presents two variants of the MLR method: RRP-MR (Repeated RP-based Multi-Reselling) and HC-MR (Hash Chain-based Multi-Reselling). The RRP-MR method is designed such that a buyer can choose either to continue or to stop the multi-reselling of a license. Like the RD method, the RRP-MR method makes use of RP, LI, LRL, and the RDS, RDA, and RDC protocols to achieve fair and secure reselling. The HC-MR method allows multiple resellings while keeping the overhead on LI at a minimum level, and it enables a buyer to check how many times a license can be further resold. To do so, HC-MR utilises MRP and the hash chain cryptographic primitive along with LRL, LI, and the RDS, RDA, and RDC protocols. The analysis and evaluation of these three methods have been conducted. While supporting license reselling, the methods are designed to prevent a reseller from (1) continuing to use a resold license, (2) reselling a non-resalable license, and (3) reselling one license an unauthorised number of times. In addition, they enable content owners of resold contents to trace a buyer who has violated any of the usage rights of a license bought from a reseller. Moreover, the methods enable a buyer to verify whether a license he is about to buy is legitimately offered for resale.
Furthermore, the two methods support market power, in that a reseller can maximise his profit and a buyer can minimise his cost in a reselling process. In comparison with related work, our solution does not make use of any trusted hardware device and is thus more cost-effective, while satisfying the interests of both resellers and buyers and protecting the content owner's rights.
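To illustrate the hash-chain primitive that HC-MR relies on, here is a minimal Python sketch of a generic SHA-256 hash chain in which revealing successive preimages could account for remaining resales; the anchor handling, parameter names, and protocol roles are assumptions for the example, not the thesis's actual MRP token design or protocol messages.

# Generic hash-chain sketch (not the thesis protocol): an anchor h_N is published,
# and revealing the i-th preimage shows how many resales remain.
import hashlib
import os

def build_chain(seed, n):
    # Return [h_0, h_1, ..., h_n] where h_0 = seed and h_{i+1} = SHA-256(h_i)
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def reaches_anchor(preimage, anchor, max_steps):
    # Check that hashing `preimage` at most `max_steps` times reaches `anchor`
    h = preimage
    for _ in range(max_steps):
        if h == anchor:
            return True
        h = hashlib.sha256(h).digest()
    return h == anchor

n_resales = 5
chain = build_chain(os.urandom(32), n_resales)
anchor = chain[-1]   # h_N; assumed to be the value an MRP-style token would commit to
# The i-th resale could reveal chain[n_resales - i]; checking it against the anchor
# shows how many resales are left without asking the issuer again.
print(reaches_anchor(chain[n_resales - 1], anchor, n_resales))   # True: one hash away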
