101
Seleção de atributos para mineração de processos na gestão de incidentes / Attribute selection for process mining on incident management process
Amaral, Claudio Aparecido Lira do, 20 March 2018 (has links)
The incident management process is the most widely adopted by companies; however, it still lacks techniques that can generate accurate estimates of completion time. This work studies a real incident management process by means of process mining, discovering the process model in the form of an annotated transition system, and proposes automated means of selecting the attributes that describe the process adequately, so as to generate realistic estimates of the time to completion. The strategy resulting from applying feature selection techniques (filter and wrapper) yields annotated transition systems that are more accurate while retaining some degree of generalization. The solution presented in this work represents an improvement in process mining, in the specific context of building annotated transition systems and using them as statistics generators for the modeled process.
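For context, the core mechanism the thesis builds on (following the general time-prediction approach of van der Aalst et al.) can be sketched in a few lines: the event log is replayed, each case prefix is abstracted into a state, and each state is annotated with the remaining times observed there. The state abstraction below (the set of activities seen so far) and all names are illustrative assumptions; choosing which attributes enter this abstraction is precisely what the thesis's filter and wrapper selection decides.

```python
from collections import defaultdict
from statistics import mean

def build_annotated_ts(log):
    """log: dict case_id -> list of (activity, timestamp), sorted by time.
    State abstraction: the set of activities observed so far."""
    annotations = defaultdict(list)  # state -> remaining times seen there
    for case, events in log.items():
        end = events[-1][1]
        seen = frozenset()
        for activity, ts in events:
            seen = seen | {activity}
            annotations[seen].append(end - ts)  # time still to completion
    return {state: mean(times) for state, times in annotations.items()}

def predict_remaining(model, prefix_activities):
    """Estimate remaining time for a running case from its activity prefix."""
    return model.get(frozenset(prefix_activities))

# toy incident log, timestamps in hours
log = {
    "c1": [("open", 0), ("assign", 1), ("resolve", 5)],
    "c2": [("open", 0), ("assign", 2), ("resolve", 4)],
}
model = build_annotated_ts(log)
print(predict_remaining(model, ["open", "assign"]))  # mean of 4 and 2 -> 3
```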
102
Applying Attribute-Based Encryption in Two-Way Radio Talk Groups: A Feasibility Study
Gough, Michael Andreas, 01 May 2018 (has links)
In two-way radio systems, talk groups are used to organize communication. Some situations may call for creating a temporary talk group, but there is no straightforward way to do this: making a new talk group requires programming radios offline, and while temporary groups can be created, this requires inputting radio IDs, which is tedious on a radio's limited controls. By describing group members using attributes, ciphertext-policy attribute-based encryption (CP-ABE) can be used to quickly create sub-groups of a talk group. This scheme requires fewer button presses, and messages sent in the new talk group are kept secret. CP-ABE can be used on deployed hardware, but performance varies with the type of embedded processor and the number of attributes used. Because radio communication is time-critical, care must be taken not to introduce too much audio delay. By using benchmark programs on a variety of single-board computers, we explore the limits of using CP-ABE on a two-way radio.
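A minimal sketch of this idea, using the Charm-Crypto library's implementation of the Bethencourt-Sahai-Waters CP-ABE scheme (the library choice, curve, and attribute names are illustrative assumptions, not the thesis's actual setup):

```python
# Sketch: ad-hoc sub-group keying with CP-ABE (assumes charm-crypto installed)
from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07

group = PairingGroup('SS512')          # supersingular curve, illustrative choice
cpabe = CPabe_BSW07(group)
pk, mk = cpabe.setup()

# Each radio's key embeds its attributes, set when the radio is provisioned.
sk = cpabe.keygen(pk, mk, ['FIRE', 'NORTHDISTRICT', 'SUPERVISOR'])

# A dispatcher forms a temporary sub-group with a policy; no radio IDs needed.
session_key = group.random(GT)         # would seed an AES key for voice frames
ct = cpabe.encrypt(pk, session_key, '(FIRE and NORTHDISTRICT)')

# Only radios whose attributes satisfy the policy recover the session key,
# so the expensive pairing operations happen once per sub-group, not per frame.
assert cpabe.decrypt(pk, sk, ct) == session_key
```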
103
Interaction Testing, Fault Location, and Anonymous Attribute-Based Authorization
January 2019 (has links)
This dissertation studies three classes of combinatorial arrays with practical applications in testing, measurement, and security. Covering arrays are widely studied in software and hardware testing to indicate the presence of faulty interactions. Locating arrays extend covering arrays to achieve identification of the interactions causing a fault by requiring additional conditions on how interactions are covered in rows. This dissertation introduces a new class, the anonymizing arrays, to guarantee a degree of anonymity by bounding the probability that a particular row is identified by the interaction presented. Similarities among these arrays lead to common algorithmic techniques for their construction, which this dissertation explores; differences arising from their application domains lead to the unique features of each class, requiring the techniques to be tailored to the specifics of each problem.
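To make the first of these concrete: a covering array of strength $t$ guarantees that every $t$-way assignment of symbols to columns appears in some row. A brute-force checker, written directly from the definition (an illustration, not the dissertation's code):

```python
from itertools import combinations, product

def is_covering_array(array, t, levels):
    """Check that every t-way interaction of symbol assignments appears in
    at least one row. `array` is a list of rows; `levels[c]` is the set of
    symbols column c may take."""
    k = len(levels)
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        needed = set(product(*(levels[c] for c in cols)))
        if seen != needed:
            return False  # some t-way interaction is never tested
    return True

# a covering array CA(4; 2, 3, 2): 4 rows, strength 2, 3 binary factors
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_covering_array(rows, 2, [{0, 1}] * 3))  # True
```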
One contribution of this work is a conditional expectation algorithm to build covering arrays via an intermediate combinatorial object. Conditional expectation efficiently finds intermediate-sized arrays that are particularly useful as ingredients for recursive algorithms. A cut-and-paste method creates large arrays from small ingredients, and performing transformations on the copies reduces redundancy in the composed arrays, yielding fewer rows.
This work contains the first algorithm for constructing locating arrays for general values of $d$ and $t$. A randomized computational search framework verifies whether a candidate array is $(\bar{d},t)$-locating by partitioning the search space, and performs random resampling when a candidate fails. Algorithmic parameters determine which columns to resample and when to add additional rows to the candidate array. Additionally, the performance of these parameters is analyzed to provide guidance on tuning them to prioritize speed, accuracy, or a combination of both.
This work proposes anonymizing arrays as a class related to covering arrays with a higher coverage requirement and constraints. The algorithms for covering and locating arrays are tailored to anonymizing array construction. An additional property, homogeneity, is introduced to meet the needs of attribute-based authorization. Two metrics, local and global homogeneity, are designed to compare anonymizing arrays with the same parameters. Finally, a post-optimization approach reduces the homogeneity of an anonymizing array. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
104
Exchanges for Complex Commodities: Toward a General-Purpose System for On-Line Trading
Hershberger, John, 20 August 2003 (has links)
The modern economy includes a variety of markets, and the Internet has opened opportunities for efficient on-line trading. Researchers have developed algorithms for various auctions, which have become a popular means for on-line sales. They have also designed algorithms for exchange-based markets, similar to the traditional stock exchange, which support fast-paced trading of rigidly standardized securities. In contrast, there has been little work on exchanges for complex nonstandard commodities, such as used cars or collectible stamps.
We propose a formal model for trading of complex goods, and present an automated exchange for a limited version of this model. The exchange allows the traders to describe commodities by multiple attributes; for example, a car buyer may specify a model, options, color, and other desirable properties. Furthermore, a trader may enter constraints on the acceptable items rather than a specific item; for example, a buyer may look for any car that satisfies certain constraints, rather than for one particular vehicle.
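The matching semantics can be illustrated with a small sketch (attribute names and predicates are hypothetical; the actual exchange indexes orders rather than testing each buy/sell pair one by one):

```python
def matches(buy, sell):
    """A buy order is a dict of attribute constraints (predicates);
    a sell order is a dict of concrete attribute values."""
    return all(attr in sell and ok(sell[attr]) for attr, ok in buy.items())

sell = {"model": "Mustang", "year": 1998, "price": 8000, "color": "red"}
buy = {
    "model": lambda v: v in {"Mustang", "Camaro"},
    "year":  lambda v: v >= 1995,
    "price": lambda v: v <= 9000,
}
print(matches(buy, sell))  # True: any car satisfying the constraints trades
```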
We present an extensive empirical evaluation of the implemented exchange, using artificial data, and then give results for two real-world markets, used cars and commercial paper. The experiments show that the system supports markets with up to 260,000 orders, and generates one hundred to one thousand trades per second.
105
Borromean: Preserving Binary Node Attribute Distributions in Large Graph Generations
Gandy, Clayton A., 25 June 2018 (has links)
Real graph datasets are important for many science domains, from understanding epidemics to modeling traffic congestion. To facilitate access to realistic graph datasets, researchers proposed various graph generators typically aimed at representing particular graph properties. While many such graph generators exist, there are few techniques for generating graphs where the nodes have binary attributes. Moreover, generating such graphs in which the distribution of the node attributes preserves real-world characteristics is still an open challenge. This thesis introduces Borromean, a graph generating algorithm that creates synthetic graphs with binary node attributes in which the attributes obey an attribute-specific joint degree distribution. We show experimentally the accuracy of the generated graphs in terms of graph size, distribution of attributes, and distance from the original joint degree distribution. We also designed a parallel version of Borromean in order to generate larger graphs and show its performance. Our experiments show that Borromean can generate graphs of hundreds of thousands of nodes in under 30 minutes, and these graphs preserve the distribution of binary node attributes within 40% on average.
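The statistic being preserved can be illustrated with a short sketch (assuming the networkx library; this is not Borromean itself): for every edge, tally the attribute value and degree of both endpoints.

```python
from collections import Counter
import networkx as nx

def attribute_joint_degree_distribution(G, attr):
    """Count, for every edge, the (attribute, degree) pair of both endpoints.
    This is the kind of statistic a generator like Borromean aims to preserve."""
    dist = Counter()
    for u, v in G.edges():
        a = (G.nodes[u][attr], G.degree(u))
        b = (G.nodes[v][attr], G.degree(v))
        dist[tuple(sorted([a, b]))] += 1  # order-independent edge key
    return dist

G = nx.Graph()
G.add_nodes_from([(1, {"flag": 0}), (2, {"flag": 1}), (3, {"flag": 1})])
G.add_edges_from([(1, 2), (2, 3)])
print(attribute_joint_degree_distribution(G, "flag"))
```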
106
Machine learning for automatic classification of remotely sensed data
Milne, Linda, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008 (has links)
As more and more remotely sensed data becomes available, it is becoming increasingly hard to analyse it with the traditional labour-intensive, manual methods. The commonly used techniques, which involve expert evaluation, are widely acknowledged as providing inconsistent results at best. We need more general techniques that can adapt to a given situation and that incorporate the strengths of the traditional methods, human operators, and new technologies. The difficulty in interpreting remotely sensed data is that often only a small amount of data is available for classification, and it can be noisy, incomplete, or contain irrelevant information. Given that the training data may be limited, we demonstrate a variety of techniques for highlighting information in the available data and for selecting the most relevant information for a given classification task. We show that more consistent results between the training data and an entire image can be obtained, and how misclassification errors can be reduced. Specifically, a new technique for attribute selection in neural networks is demonstrated. Machine learning techniques, in particular, provide us with a means of automating classification using training data from a variety of data sources, including remotely sensed data and expert knowledge. A classification framework is presented in this thesis that can be used with any classifier and any available data. While it was developed in the context of vegetation mapping from remotely sensed data using machine learning classifiers, it is a general technique that can be applied to any domain, with emphasis on domains that have inadequate training data available.
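As a stand-in illustration of scoring attributes before classification (a generic filter-style ranking, not the thesis's neural-network-based technique), consider scikit-learn's mutual-information scorer:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# toy stand-in for band values of labelled pixels: 100 samples, 5 bands
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 2] > 0).astype(int)        # only band 2 carries the class signal

scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]   # most informative attributes first
print(ranking)                       # band 2 should rank first
```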
107
Effective Linear-Time Feature Selection
Pradhananga, Nripendra, January 2007 (has links)
The classification learning task requires selecting a subset of features to represent the patterns to be classified, because both the performance of the classifier and the cost of classification are sensitive to the choice of features used to construct it. Exhaustive search is impractical since it examines every possible combination of features. Heuristic and random searches run faster, but the problem persists when dealing with high-dimensional datasets. We investigate a heuristic, forward, wrapper-based approach, called Linear Sequential Selection, which limits the search space at each iteration of the feature selection process. We then introduce randomization into the search space; the resulting algorithm is called Randomized Linear Sequential Selection. Our experiments demonstrate that both methods are faster, find smaller subsets, and can even increase classification accuracy. We also explore the idea of ensemble learning, proposing two ensemble creation methods, Feature Selection Ensemble and Random Feature Ensemble, both of which apply a feature selection algorithm to create the individual classifiers of the ensemble. Our experiments have shown that both methods work well with high-dimensional data.
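The forward wrapper loop underlying these methods can be sketched as follows (plain sequential forward selection with an illustrative k-NN classifier; Linear Sequential Selection additionally restricts the candidate set per iteration, which this sketch omits):

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_wrapper_select(X, y, max_features):
    """Greedy forward selection: at each step add the feature whose inclusion
    gives the best cross-validated accuracy (the wrapper approach)."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    while remaining and len(selected) < max_features:
        scores = [(cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean(), f)
                  for f in remaining]
        score, f = max(scores)
        if score <= best_score:
            break                    # no candidate improves accuracy; stop
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected, best_score

# usage (illustrative): selected, acc = forward_wrapper_select(X, y, 10)
```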
108
Attributes and their potential to analyze and interpret 3D GPR data
Böniger, Urs, January 2010 (has links)
Based on technological advances made within the past decades, ground-penetrating radar (GPR) has become a well-established, non-destructive subsurface imaging technique. Catalyzed by recent demands for high-resolution, near-surface imaging (e.g., the detection of unexploded ordnance and subsurface utilities, or hydrological investigations), the quality of today's GPR-based near-surface images has improved significantly. At the same time, the analysis of oil- and gas-related reflection seismic data sets has advanced considerably.
Considering the sensitivity of attribute analysis to data positioning in general, and of multi-trace attributes in particular, trace positioning accuracy is of major importance for the success of attribute-based analysis flows. Therefore, to study the feasibility of GPR-based attribute analyses, I first developed and evaluated a real-time GPR surveying setup based on a modern tracking total station (TTS). Combining current GPR systems' capability of fusing global positioning system (GPS) and geophysical data in real time, the ability of modern TTS systems to generate GPS-like positional output, and wireless data transmission using radio modems results in a flexible and robust surveying setup. To elaborate the feasibility of this setup, I studied the major limitations of such an approach: system cross-talk and data delays known as latencies. Experimental studies showed that when a minimal distance of ~5 m between the GPR and the TTS system is maintained, the signal-to-noise ratio of the GPR data acquired with radio communication equals that acquired without it. To address the limitations imposed by system latencies, inherent to all real-time data fusion approaches, I developed a novel correction (calibration) strategy to assess the gross system latency and correct for it. This resulted in the centimeter trace accuracy required by high-frequency and/or three-dimensional (3D) GPR surveys.
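A generic building block for such a latency calibration is estimating a constant lag from the peak of a cross-correlation between a reference stream and its delayed counterpart; a sketch under that simplifying assumption (not the thesis's actual procedure):

```python
import numpy as np

def estimate_latency(reference, delayed, dt):
    """Estimate a constant lag between two uniformly sampled streams
    (e.g., a position-derived signal and its GPR-derived counterpart)
    from the peak of their cross-correlation."""
    ref = reference - reference.mean()
    dly = delayed - delayed.mean()
    xcorr = np.correlate(dly, ref, mode="full")
    lag = np.argmax(xcorr) - (len(ref) - 1)  # samples `delayed` lags behind
    return lag * dt

t = np.arange(0, 10, 0.01)
sig = np.sin(2 * np.pi * 0.5 * t)
print(estimate_latency(sig, np.roll(sig, 25), dt=0.01))  # ~0.25 s
```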
Having introduced this flexible high-precision surveying setup, I successfully demonstrated the application of attribute-based processing to GPR-specific problems, which may differ significantly from the geological ones typically addressed by the oil and gas industry using seismic data. In this thesis, I concentrated on archaeological and subsurface utility problems, as they represent typical near-surface geophysical targets. Enhancing 3D archaeological GPR data sets using a dip-steered filtering approach, followed by calculation of coherency and similarity, allowed me to conduct subsurface interpretations far beyond those obtained by classical time-slice analyses. I could show that incorporating additional data sets (magnetic and topographic) and attributes derived from them can further improve the interpretation. In a case study, such an approach revealed the complementary nature of the individual data sets and, for example, allowed conclusions to be drawn about the source location of magnetic anomalies by concurrently analyzing GPR time/depth slices.
In addition to archaeological targets, subsurface utility detection and characterization is a steadily growing field of application for GPR. I developed a novel attribute called depolarization. Incorporating geometrical and physical feature characteristics into the depolarization attribute allowed me to display the observed polarization phenomena efficiently. Geometrical enhancement makes use of an improved symmetry extraction algorithm based on Laplacian high-boosting, followed by a phase-based symmetry calculation using a two-dimensional (2D) log-Gabor filterbank decomposition of the data volume. To extract the physical information from the dual-component data set, I employed a sliding-window principal component analysis. The combination of the geometrically derived feature angle and the physically derived polarization angle allowed me to enhance the polarization characteristics of subsurface features. Ground-truth information obtained by excavations confirmed this interpretation. In the future, including cross-polarized antenna configurations in the processing scheme may further improve the quality of the depolarization attribute.
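The sliding-window PCA step can be sketched as follows (window length, normalization, and angle convention are assumptions for illustration): within each window, the first principal component of the two antenna components gives the dominant polarization direction.

```python
import numpy as np

def sliding_window_polarization(comp_x, comp_y, win):
    """For each sample, estimate the dominant polarization angle from the
    first principal component of the dual-component data inside a window."""
    n = len(comp_x)
    angles = np.full(n, np.nan)
    half = win // 2
    for i in range(half, n - half):
        seg = np.stack([comp_x[i - half:i + half + 1],
                        comp_y[i - half:i + half + 1]])
        seg = seg - seg.mean(axis=1, keepdims=True)
        cov = seg @ seg.T                    # 2x2 covariance (unnormalized)
        eigvals, eigvecs = np.linalg.eigh(cov)
        v = eigvecs[:, -1]                   # eigenvector of largest eigenvalue
        angles[i] = np.degrees(np.arctan2(v[1], v[0])) % 180.0
    return angles

t = np.linspace(0, 1, 200)
x, y = np.cos(2 * np.pi * 5 * t), 0.5 * np.cos(2 * np.pi * 5 * t)
print(np.nanmedian(sliding_window_polarization(x, y, 31)))  # ~26.6 degrees
```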
In addition to polarization phenomena, the time-dependent frequency evolution of GPR signals might hold further information on subsurface architecture and/or material properties. High-resolution, sparsity-promoting decomposition approaches have recently had a significant impact on the image and signal processing community. In this thesis, I introduced a modified tree-based matching pursuit approach and showed, on several synthetic examples, that it clearly outperforms other commonly used time-frequency decomposition approaches with respect to both time and frequency resolution. Apart from the investigation of tuning effects in GPR data, I also demonstrated the potential of high-resolution sparse decompositions for advanced data processing: modulating the frequency of individual atoms allows frequency attenuation effects to be corrected efficiently and resolution to be improved by shifting the average frequency level.
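Plain (non-tree-based) matching pursuit, which the modified approach builds on, greedily subtracts the best-matching dictionary atom from the residual; a sketch over a toy Gabor-like dictionary:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom with the
    largest inner product with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        inner = dictionary @ residual        # correlation with every atom
        k = np.argmax(np.abs(inner))
        coeff = inner[k]
        residual -= coeff * dictionary[k]
        decomposition.append((k, coeff))
    return decomposition, residual

# dictionary of unit-norm Gabor-like atoms (illustrative, not the thesis's
# tree-based variant)
n = 256
t = np.arange(n)
atoms = np.array([np.exp(-0.5 * ((t - c) / 12.0) ** 2)
                  * np.cos(2 * np.pi * f * t / n)
                  for c in range(0, n, 16) for f in (4, 8, 16)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
sig = 3.0 * atoms[5] - 2.0 * atoms[20]
print(matching_pursuit(sig, atoms, 2)[0])    # picks atoms 5 and 20
```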
GPR-based attribute analysis is still in its infancy. Considering the growing adoption of 3D GPR studies, there will certainly be an increasing demand for improved subsurface interpretation in the future. Similar to the assessment of quantitative reservoir properties through the combination of 3D seismic attribute volumes with sparse well-log information, combined parameter estimation represents another step in realizing the potential of attribute-driven GPR data analyses. / Over the past decades, geophysical exploration methods have become widely used for the non-destructive and minimally invasive investigation of the shallow subsurface. Compared with the multitude of other existing techniques, ground-penetrating radar (GPR) enables, under favorable site conditions, investigations with the highest spatial resolution. GPR is an electromagnetic (EM) technique and, as a wave-based method, relies on the propagation of high-frequency EM waves, that is, on their reflection, refraction, and transmission in the subsurface. While two-dimensional survey strategies are already widespread, interest is currently growing in high-resolution, area-covering survey strategies that allow subsurface structures to be imaged in three dimensions.
A method similar in principle to GPR is reflection seismics, whose main application lies in the exploration of hydrocarbon deposits. Over the last decade, the growing demand for new oil and gas deposits, together with the need to exploit existing reservoirs optimally, led to the intensified application and development of so-called seismic attributes. Attributes represent a measure of the data that leads to an improved visual representation or quantification of data properties relevant to the question at hand. Despite the success of attribute analyses in reservoir-related applications and the fundamental similarity of reflection seismic and GPR data sets, attribute-based approaches have so far found little use in the GPR community. The aim of this work is to investigate the potential of attribute analyses for improved interpretation of GPR data, with an emphasis on applications from archaeology and engineering.
The success of attributes in general, and of those accounting for neighborhood relations in particular, is closely tied to the accuracy with which the measured data can be spatially localized. Before the actual attribute analysis, the possibilities for real-time kinematic positioning in GPR surveying were therefore investigated. I was able to show that combining modern self-tracking total stations with GPR instruments, using powerful radio modems, enables centimeter-accurate positioning. Experimental studies showed that the two potentially limiting factors, system-induced signal interference and data delays (so-called latencies), can be neglected or corrected for.
In archaeology, investigating near-surface structures and their spatial form is important for optimizing planned excavations, and GPR has become one of the most widely used non-destructive geophysical methods in this field. Archaeological GPR data sets are, however, often highly complex, which can be attributed to the repeated anthropogenic use of the shallow subsurface. This work showed that using two different attributes to describe the variability between neighboring traces enables a considerably improved interpretation with respect to the question at hand. Furthermore, I was able to show that an integrated evaluation of several data sets (in terms of both methods and processing) can lead to a better-founded interpretation, for example when the data sets carry complementary information.
In engineering, damage to or destruction of buried utilities is a major source of financial loss. Polarization effects, that is, changes in signal amplitude as a function of acquisition and physical parameters, are a well-known phenomenon that has so far hardly been exploited in practice. This work showed how polarization effects can be used for improved interpretation: merging geometrical and physical attributes into a new, so-called depolarization attribute demonstrated how different utility types can be extracted and classified by their polarization characteristics.
Further important physical characteristics of the GPR wavefield can be investigated with the matching pursuit method, which has strongly influenced modern signal and image processing in recent years. In geophysics, matching pursuit has so far been used mainly for high-resolution time-frequency analysis. Using a modified tree-based matching pursuit algorithm, I demonstrated the further possibilities such data decompositions open up for the processing and interpretation of GPR data. Overall, this work shows how modern surveying techniques and attribute-based analysis strategies can be used to acquire three-dimensional data effectively and accurately, and to interpret the resulting data sets efficiently and reliably.
109
Measuring Vertical And Horizontal Intra-industry Trade For Turkish Manufacturing Industry Over Time
Senoglu, Demet, 01 September 2003 (has links) (PDF)
In traditional trade theories, foreign trade plays the role of filling the gap of products not produced within a country. In the early 1960s, however, trade theorists observed an increasing exchange of similar products (intra-industry trade) in world trade. Once it was realized that intra-industry trade had become a very important part of world trade, more comprehensive studies of it were conducted. At the end of the 1970s, trade theorists started to analyze intra-industry trade between developed countries (horizontal intra-industry trade) and intra-industry trade between developed and developing countries (vertical intra-industry trade) separately, because their characteristics differ: horizontal intra-industry models are characterized by attribute variation between products, while vertical intra-industry models are characterized by quality variation.
This study investigates the measurement of horizontal and vertical intra-industry trade for the Turkish manufacturing industry. We address the questions of whether intra-industry trade in the Turkish manufacturing sector is more of the horizontal or the vertical type, and whether vertical industries outnumber horizontal industries at the 3-digit industry level.
Empirical analysis shows that the majority of intra-industry trade in the Turkish manufacturing sector is of the vertical type: the sector exports lower-quality varieties in exchange for higher-quality varieties. Our analysis also indicates that a large share of the 3-digit industries primarily involved in intra-industry trade are vertical industries.
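The standard machinery behind such measurements is the Grubel-Lloyd index together with the unit-value criterion of Greenaway, Hine, and Milner: trade is intra-industry to the extent that exports and imports overlap, and it is horizontal when export and import unit values are similar. A sketch (the ±15% dispersion factor is the conventional choice and may differ from the thesis's exact setup):

```python
def grubel_lloyd(exports, imports):
    """Share of an industry's trade that is intra-industry (matched)."""
    return 1 - abs(exports - imports) / (exports + imports)

def iit_type(uv_export, uv_import, alpha=0.15):
    """Classify IIT by relative unit values: similar unit values suggest
    horizontal IIT (attribute variation); dissimilar ones suggest vertical
    IIT (quality variation)."""
    ratio = uv_export / uv_import
    if 1 - alpha <= ratio <= 1 + alpha:
        return "horizontal"
    return "low-quality vertical" if ratio < 1 - alpha else "high-quality vertical"

print(grubel_lloyd(120.0, 100.0))               # ~0.91: high IIT
print(iit_type(uv_export=6.0, uv_import=10.0))  # exports the cheaper variety
```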
110
Secure Schemes for Semi-Trusted Environment
Tassanaviboon, Anuchart, January 2011 (has links)
In recent years, two distributed system technologies have emerged: Peer-to-Peer (P2P) and cloud computing. For the former, the computers at the edge of networks share their resources, i.e., computing power, data, and network bandwidth, and obtain resources from other peers in the same community. Although this technology enables efficiency, scalability, and availability at low cost of ownership and maintenance, peers defined as "like each other" are not wholly controlled by one another or by the same authority. In addition, resources and functionality in P2P systems depend on peer contribution, i.e., storing, computing, routing, etc. These specific aspects raise security concerns and attacks that many researchers try to address. Most solutions proposed by researchers rely on public-key certificates from an external Certificate Authority (CA) or a centralized Public Key Infrastructure (PKI). However, both CA and PKI are at odds with fully decentralized P2P systems that are self-organizing and infrastructureless.
To avoid this contradiction, this thesis concerns the provisioning of public-key certificates in P2P communities, a crucial foundation for securing P2P functionalities and applications. We create a framework, named the Self-Organizing and Self-Healing CA group (SOHCG), that can provide certificates without a centralized Trusted Third Party (TTP). In our framework, a CA group is initialized in a Content Addressable Network (CAN) by trusted bootstrap nodes and then grows to a mature state by itself. Based on our group management policies and predefined parameters, membership in a CA group is dynamic and has a uniform distribution over the P2P community, and the size of a CA group is kept to a level that balances performance and acceptable security. A multicast group over the underlying CA group is constructed to reduce the communication and computation overhead of collaboration among CA members. To maintain the quality of the CA group, an honest majority of members is maintained by a Byzantine agreement algorithm, and all shares are refreshed gradually and continuously. Our CA framework has been designed to meet all design goals: being self-organizing, self-healing, scalable, resilient, and efficient. A security analysis shows that the framework enables key registration and certificate issuance with resistance to external attacks, i.e., node impersonation, man-in-the-middle (MITM), Sybil, and a specific form of DoS, as well as internal attacks, i.e., CA functionality interference and CA group subversion.
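A common primitive behind such share-refreshing, threshold-style CA groups is Shamir secret sharing, where any threshold-sized subset of members can reconstruct (or jointly use) the CA key; a minimal sketch over a prime field (toy parameters, not the thesis's implementation):

```python
import random

P = 2 ** 127 - 1  # a Mersenne prime; a real deployment uses the group order

def make_shares(secret, threshold, n):
    """Split `secret` so that any `threshold` shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, threshold=3, n=7)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 7 shares suffice
assert reconstruct(shares[2:5]) == 123456789
```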
Cloud computing is the most recent evolution of distributed systems that, like P2P systems, enables shared resources. Unlike P2P systems, cloud entities have asymmetric roles, as in client-server models: end-users collaborate with Cloud Service Providers (CSPs) through Web interfaces or Web portals. Cloud computing is a combination of technologies, e.g., SOA services, virtualization, grid computing, clustering, P2P overlay networks, management automation, and the Internet. With these technologies, cloud computing can deliver services with specific properties: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. However, these core technologies have their own intrinsic vulnerabilities, so they expose cloud computing to specific attacks. Furthermore, since public clouds are a form of outsourcing, the security of users' resources must rely on CSPs' administration. This situation raises two crucial security concerns for users: locking data into a single CSP and losing control of resources. Providing inter-operation between Application Service Providers (ASPs) and untrusted cloud storage is a countermeasure that can protect users from vendor lock-in and from losing control of their data.
To meet the above challenge, this thesis proposes a new authorization scheme, named OAuth and ABE based authorization (AAuth), that is built on the OAuth standard and leverages Ciphertext-Policy Attribute-Based Encryption (CP-ABE) and ElGamal-like masks to construct ABE-based tokens. The ABE tokens can facilitate a user-centric approach, end-to-end encryption, and end-to-end authorization in semi-trusted clouds. With these facilities, owners can take control of their data resting in semi-trusted clouds and safely use services from unknown ASPs. To this end, our scheme divides the attribute universe into two disjoint sets: confined attributes, defined by owners to limit the lifetime and scope of tokens, and descriptive attributes, defined by authority(s) to certify the characteristics of ASPs. Security analysis shows that AAuth maintains the same security level as the original CP-ABE scheme and, like OAuth, protects users from exposing their credentials to ASPs. Moreover, AAuth can resist both external and internal attacks, including those from untrusted cloud storage. Since most cryptographic functions are delegated from owners to CSPs, AAuth gains computing power from the clouds. In our extensive simulation, AAuth's greater overhead relative to OAuth was balanced by its greater security. Furthermore, our scheme works seamlessly with storage providers by retaining the providers' APIs in the usual way.
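To illustrate the ElGamal-like mask ingredient in isolation (with insecure toy parameters; AAuth's actual construction binds such masks into ABE-based tokens):

```python
import random

# toy parameters: prime modulus and generator (NOT secure sizes; a real
# deployment would use a standardized group or the pairing groups of CP-ABE)
p = 2 ** 127 - 1
g = 3

sk = random.randrange(2, p - 1)   # held by the authorized party
pk = pow(g, sk, p)                # published

def apply_mask(m):
    """Blind a token payload with an ElGamal-style multiplicative mask."""
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

c1, c2 = apply_mask(123456)
recovered = (c2 * pow(pow(c1, sk, p), -1, p)) % p
assert recovered == 123456        # only the secret-key holder can unmask
```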