  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

THE UTILIZATION OF SELECTED MANAGERIAL ACCOUNTING CONCEPTS AND TECHNIQUES IN BRANCH BANK MANAGEMENT

Sabbagh, Hashem Mohammad Ali, 1942- January 1971 (has links)
No description available.
82

Extraction of Electromagnetic Properties of Metamaterials with Branch Compensation from Phase Tracking

Lewis, Jacob Christian January 2020 (has links)
In the field of electromagnetism, metamaterials are engineered materials that exhibit unique, exploitable properties. The permittivity of a metamaterial, defined as capacitance per meter, can vary with frequency or time, and can even be negative. This is useful for tuning antennas, changing their operating frequency or direction of propagation, and even for designing cloaking systems. However, the theory behind metamaterials needs further study. One of the biggest open issues is determining the constitutive parameters of a metamaterial when they vary. Previous research has shown that branches, or mathematical discontinuities, occur in the derivation of permittivity from the scattering parameters of a metamaterial. This thesis provides further understanding of the theory behind these branches and presents a new method to compensate for them. This new method, called the phase tracking method, may be considered a modern adaptation of the Nicolson-Ross-Weir method.
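The branch ambiguity described above can be illustrated with a toy phase-tracking extraction: the measured phase of a transmission coefficient is only known modulo 2π, so a naive extraction of the refractive index jumps between branches, while unwrapping the phase along the frequency sweep restores a continuous answer. This is a minimal sketch in the spirit of (not a reproduction of) the thesis's method; the slab thickness, sweep range, and lossless one-pass transmission model are all assumed for illustration.

```python
import numpy as np

c = 3e8                                  # speed of light (m/s)
d = 0.004                                # slab thickness (m), assumed
freqs = np.linspace(8e9, 12e9, 401)      # assumed X-band sweep
n0 = 3.5                                 # true refractive index of the toy slab

# Toy transmission coefficient: lossless slab, internal reflections ignored.
phase_true = 2 * np.pi * freqs * n0 * d / c
T = np.exp(-1j * phase_true)

# Naive extraction: np.angle is confined to (-pi, pi], so the recovered
# index jumps by a whole branch wherever the true phase crosses that cut.
naive_n = -np.angle(T) * c / (2 * np.pi * freqs * d)

# Phase tracking: unwrap along the sweep so the 2*pi*m branch is resolved
# by continuity (the absolute branch is pinned by starting the sweep where
# the slab is electrically thin, i.e. the first phase is below pi).
tracked_n = -np.unwrap(np.angle(T)) * c / (2 * np.pi * freqs * d)
```

Here `tracked_n` stays at 3.5 across the whole sweep, while `naive_n` shows a large discontinuity near the frequency where the slab's electrical length passes π.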
83

Exact Approaches for Higher-Dimensional Orthogonal Packing and Related Problems

Mesyagutov, Marat 12 February 2014 (has links)
NP-hard problems of higher-dimensional orthogonal packing are considered. We look closely at their logical structure and show that they can be decomposed into problems of smaller dimension with a special contiguous structure. This decomposition influences the modeling of the packing process and results in three new solution approaches. Keeping this decomposition in mind, we model the smaller-dimensional problems in a single position-indexed formulation with non-overlapping inequalities serving as binding constraints. Thus, we arrive at a new integer linear programming model, which we subject to polyhedral analysis. Furthermore, we establish general non-overlapping and density inequalities and prove, under appropriate assumptions, their facet-defining property for the convex hull of the integer solutions. Based on the proposed model and the strong inequalities, we develop a new branch-and-cut algorithm. Being a relaxation of the higher-dimensional problem, each of the smaller-dimensional problems is also relevant for other areas, e.g. scheduling. To tackle any of these smaller-dimensional problems, we use a Gilmore-Gomory model, which is a Dantzig-Wolfe decomposition of the position-indexed formulation. In order to obtain a contiguous structure for the optimal solution, its basis matrix must have the consecutive-ones property. For the construction of such matrices, we develop new branch-and-price algorithms distinguished by various strategies for the enumeration of partial solutions. We also prove some characteristics of partial solutions, which tighten the slave problem of column generation. For a nonlinear modeling of the higher-dimensional packing problems, we investigate state-of-the-art constraint programming approaches, modify them, and propose new dichotomy and intersection branching strategies. To tighten the constraint propagation, we introduce new pruning rules. For that, we apply a 1D relaxation with intervals and forbidden pairs, an advanced bar relaxation, a 2D slice relaxation, and a 1D slice-bar relaxation with forbidden pairs. The new rules are based on the relaxation by the smaller-dimensional problems, which, in turn, are replaced by a linear programming relaxation of the Gilmore-Gomory model. We conclude with a discussion of implementation issues and numerical studies of all proposed approaches.
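The pairwise non-overlapping condition at the heart of such position-indexed formulations can be made concrete with a toy feasibility check: two axis-aligned boxes are disjoint exactly when one lies fully to the left, right, above, or below the other — the disjunction that non-overlapping inequalities linearize. The sketch below brute-forces integer positions and is only an illustration of the constraint, not the thesis's branch-and-cut algorithm; the container and item sizes in the usage example are invented.

```python
from itertools import product

def fits(container, items):
    """Brute-force feasibility check for 2D orthogonal packing at integer
    positions. Real instances need an ILP / branch-and-cut approach; this
    enumeration only illustrates the position-indexed search space."""
    W, H = container

    def no_overlap(a, b):
        (x1, y1, w1, h1), (x2, y2, w2, h2) = a, b
        # Disjoint iff one box is fully left / right / below / above the other.
        return x1 + w1 <= x2 or x2 + w2 <= x1 or y1 + h1 <= y2 or y2 + h2 <= y1

    # Position-indexed domains: every integer placement keeping the item inside.
    positions = [
        [(x, y) for x in range(W - w + 1) for y in range(H - h + 1)]
        for (w, h) in items
    ]
    for choice in product(*positions):
        placed = [(x, y, w, h) for (x, y), (w, h) in zip(choice, items)]
        if all(no_overlap(a, b) for i, a in enumerate(placed)
               for b in placed[i + 1:]):
            return placed
    return None
```

For example, `fits((4, 4), [(2, 4), (2, 2), (2, 2)])` finds a packing, while `fits((3, 3), [(2, 2), (2, 2)])` correctly reports infeasibility.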
84

The Possibility of Branch Conformation in Azotobacter Vinelandii Chromosomal DNA Carrying Multiple Gene Copies and Its Folded State in the Cell

Choi, Munhyeong 08 1900 (has links)
Chromosomal DNA of A. vinelandii, thought to carry multiple gene copies, was examined in an effort to visualize its chromosomal structure using electron microscopy. The chromosomal DNA of A. vinelandii may consist of multiple circular genomic units carrying multiple copies of genes. Three possible branch construction schemes and their replication modes are postulated in this study.
85

Improving Branch Prediction Accuracy Via Effective Source Information And Prediction Algorithms

Gao, Hongliang 01 January 2008 (has links)
Modern superscalar processors rely on branch predictors to sustain a high instruction fetch throughput. Given the trend toward deep pipelines and large instruction windows, a branch misprediction incurs a large performance penalty and results in a significant amount of energy wasted on instructions along wrong paths. Given their critical role in high-performance processors, branch predictors have been the subject of extensive research aimed at improving prediction accuracy. Conceptually, a dynamic branch prediction scheme includes three major components: a source, an information processor, and a predictor. Traditional work mainly focuses on the algorithm for the predictor. In this dissertation, besides novel prediction algorithms, we investigate the other components and develop untraditional ways to improve prediction accuracy. First, we propose an adaptive information processing method to dynamically extract the most effective inputs to maximize the correlation to be exploited by the predictor. Second, we propose a new prediction algorithm, which improves on the Prediction by Partial Matching (PPM) algorithm by selectively combining multiple partial matches. The PPM algorithm was previously considered optimal and has been used to derive the upper limit of branch prediction accuracy. Our proposed algorithm achieves higher prediction accuracy than PPM and can be implemented within a realistic hardware budget. Third, we discover a new locality existing between the addresses of producer loads and the outcomes of their consumer branches. We study this address-branch correlation in detail and propose a branch predictor that exploits this correlation for long-latency and hard-to-predict branches, which existing branch predictors fail to predict accurately.
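A PPM-style predictor of the kind referenced above — try the longest recorded history context first, then fall back to shorter ones — can be sketched in a few lines. This toy model is not the dissertation's algorithm (which selectively combines multiple partial matches rather than taking only the longest); the context depth, counter range, and default prediction are assumptions made for the sketch.

```python
from collections import defaultdict

class PPMBranchPredictor:
    """Toy Prediction-by-Partial-Matching branch predictor: the longest
    matching (pc, history) context supplies the prediction, with fallback
    to progressively shorter contexts and a static default."""

    def __init__(self, max_order=8):
        self.max_order = max_order
        # One table per context length: (pc, history tuple) -> 2-bit counter.
        self.tables = [defaultdict(int) for _ in range(max_order + 1)]
        self.history = ()

    def predict(self, pc):
        for order in range(min(self.max_order, len(self.history)), -1, -1):
            key = (pc, self.history[len(self.history) - order:])
            if key in self.tables[order]:
                return self.tables[order][key] >= 0  # taken iff counter >= 0
        return True  # static default: predict taken

    def update(self, pc, taken):
        # Train every context length, then shift the outcome into the history.
        for order in range(min(self.max_order, len(self.history)) + 1):
            key = (pc, self.history[len(self.history) - order:])
            c = self.tables[order][key]
            self.tables[order][key] = max(-2, min(1, c + (1 if taken else -1)))
        self.history = (self.history + (taken,))[-self.max_order:]
```

Trained on a repeating taken-taken-not-taken pattern, the longest-context tables quickly become perfectly accurate, since each 4-outcome history uniquely determines the next outcome.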
86

Side-Channel Attacks in RISC-V BOOM Front-end

Chavda, Rutvik Jayantbhai 29 June 2023 (has links)
The prevalence of side-channel attacks exploiting hardware vulnerabilities leads to the exfiltration of sensitive data such as secret keys, which poses a significant threat to the security of modern processors. The RISC-V BOOM core is an open-source modern processor design widely utilized in research and industry. It enables experimentation with microarchitectures and memory hierarchies for optimized performance across various workloads. The RISC-V BOOM core finds application in the IoT and embedded systems sector, where addressing side-channel attacks is crucial due to the significant emphasis on security. Prior studies on BOOM have mainly focused on side channels in the memory hierarchy, such as caches, or on physical attacks such as power side channels. Recently, however, the front-end of microprocessors, which is responsible for fetching and decoding instructions, has been found to be another potential source of side-channel attacks on Intel processors. In this study, I present four timing-based side-channel attacks that leverage components in the front-end of BOOM. I tested the effectiveness of the attacks using a simulator and a Xilinx VCU118 FPGA board. Finally, I provide possible mitigation techniques for these types of attacks to improve the overall security of modern processors. Our findings underscore the importance of identifying and addressing vulnerabilities in the front-end of modern processors, such as the BOOM core, to mitigate the risk of side-channel attacks and enhance system security. / Master of Science / In today's digital landscape, the security of modern processors is threatened by the increasing prevalence of side-channel attacks that exploit hardware vulnerabilities. These attacks are a type of security threat that allows attackers to extract sensitive information from computer systems by analyzing their physical behavior. The risk of such attacks is further amplified when multiple users or applications share the same hardware resources. Attackers can exploit the interactions and dependencies among shared resources to gather information and compromise the integrity and confidentiality of critical data. The RISC-V BOOM core, a widely utilized modern processor design, is not immune to these side-channel attacks. This issue demands urgent attention, especially considering its deployment in data-sensitive domains such as IoT and embedded systems. Previous studies have focused on side-channel vulnerabilities in other areas of BOOM, neglecting the front-end. However, the front-end, responsible for processing initial information, has recently emerged as another potential target for side-channel attacks. To address this, I conducted a study on the vulnerability of the RISC-V BOOM core's front-end. By conducting tests using both a software-based simulator and a physical board, I uncovered potential security threats and discussed potential techniques to mitigate these risks, thereby enhancing the overall security of modern processors. These findings underscore the significance of addressing vulnerabilities in the front-end of processors to prevent side-channel attacks and safeguard against potential malicious activities.
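The prime-and-probe pattern behind such timing attacks on shared front-end structures can be sketched with a purely simulated branch target buffer: the attacker fills the structure, lets the victim run, then re-measures access "latency" to learn which entry the victim displaced. Everything below is a toy simulation, not one of the four attacks in the thesis; the direct-mapped BTB geometry, cycle costs, and victim address are invented for illustration.

```python
BTB_SETS = 16
HIT, MISS = 1, 10  # simulated cycle costs for a predictor-structure hit/miss

class SimBTB:
    """Direct-mapped toy branch target buffer shared by attacker and victim."""
    def __init__(self):
        self.sets = {}  # set index -> tag

    def access(self, pc):
        idx, tag = pc % BTB_SETS, pc // BTB_SETS
        if self.sets.get(idx) == tag:
            return HIT
        self.sets[idx] = tag  # fill on miss (evicting the previous entry)
        return MISS

def attacker_recovers_victim_set(btb, victim_pc):
    # Prime: the attacker's branches (pcs 0..15, tag 0) fill every set.
    for i in range(BTB_SETS):
        btb.access(i)
    # Victim executes one branch, evicting the attacker's entry in one set.
    btb.access(victim_pc)
    # Probe: the set whose re-access now "takes longer" reveals the victim's set.
    for i in range(BTB_SETS):
        if btb.access(i) == MISS:
            return i
    return None
```

With a victim branch at a hypothetical address like `0x4023`, the probe recovers set `0x4023 % 16`, i.e. a few low bits of the victim's branch address leak through timing alone.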
87

The struggle for a federal office of education for Canada /

Larose, Wesley Allan. January 1975 (has links)
No description available.
88

Estratégias de resolução para o problema de job-shop flexível / Solution approaches for flexible job-shop scheduling problem

Previero, Wellington Donizeti 16 September 2016 (has links)
This thesis proposes two approaches to solve the flexible job-shop scheduling problem with the objective of minimizing the makespan. The first strategy uses a branch-and-cut (B&C) algorithm and the second is based on matheuristics. The B&C algorithm uses new classes of valid inequalities, originally formulated for job-shop scheduling problems and extended to the problem at hand. For these valid inequalities to be effective, the precedence-variable-based model proposed by Birgin et al. (2014) (A MILP model for an extended version of the flexible job shop problem. Optimization Letters, Springer, v. 8, n. 4, 1417-1431) is reformulated (MILP-2). The second approach uses the matheuristics local branching and diversification, refining and tight-refining. The computational experiments showed that the inclusion of cutting planes tightened the linear programming relaxations and improved the quality of solutions. The B&C algorithm reduced the gap value and the number of nodes explored in a large number of instances. The matheuristic approaches had an excellent performance: of the 59 instances analyzed, MILP-1-Gurobi showed better results than the matheuristic approaches in only 3 problems.
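The local branching matheuristic mentioned above can be sketched abstractly: given an incumbent 0-1 solution, a constraint bounds the Hamming distance to it, and the solver repeatedly explores that neighborhood for an improving solution. The sketch below is generic, with brute-force enumeration standing in for the MILP solver, and is not the thesis's implementation; the toy objective and neighborhood size are invented.

```python
from itertools import product

def local_branching_cut(incumbent, x):
    # LHS of the local branching constraint:
    #   sum_{j: xbar_j = 1} (1 - x_j) + sum_{j: xbar_j = 0} x_j  <=  k
    # i.e. the Hamming distance between x and the incumbent xbar.
    return sum((1 - xj) if xbar else xj for xbar, xj in zip(incumbent, x))

def local_branching(objective, x0, k=2, rounds=10):
    """Toy local branching loop: repeatedly solve the subproblem restricted
    to the k-neighborhood of the incumbent. A real matheuristic adds the
    cut to the MILP model and calls an exact solver on the subproblem."""
    best, best_val = tuple(x0), objective(x0)
    n = len(best)
    for _ in range(rounds):
        cand = min(
            (x for x in product((0, 1), repeat=n)
             if local_branching_cut(best, x) <= k),
            key=objective,
        )
        if objective(cand) >= best_val:
            break  # no improving solution within distance k
        best, best_val = cand, objective(cand)
    return best, best_val
```

On a small separable objective the loop walks through a sequence of incumbents, each within Hamming distance `k` of the previous one, until no neighborhood improves the incumbent.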
89

Um método híbrido para o problema de dimensionamento de lotes / A hybrid method for the lot sizing problem

Cherri, Luiz Henrique 27 February 2013 (has links)
This work proposes two methods to solve the capacitated lot-sizing problem with multiple products and parallel machines. The manufacturing of products consumes machine capacity (production time and setup time), which is scarce. The demand for the products is known and can be met with backlogging over a finite planning horizon. The objective is to minimize the sum of production, setup, holding, and backlog costs. In a first step, we developed a deterministic tabu search heuristic based on a random version from the literature, and then analyzed the influence of random factors on tabu search heuristics when applied to the studied problem. Subsequently, we designed a hybrid method based on tabu search, branch-and-cut, and linear programming. Computational experiments show that this hybrid method is competitive with other heuristics presented in the literature.
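A deterministic tabu search of the kind contrasted above — single-bit-flip moves, a fixed tabu tenure, and lowest-index tie-breaking in place of any randomization — might be sketched as follows. This is a generic illustration, not the thesis's heuristic; the tenure, move set, and aspiration rule are assumptions.

```python
def tabu_search(objective, x0, tenure=3, iters=50):
    """Minimal deterministic tabu search over 0-1 vectors. Ties are broken
    by the lowest objective value, then the lowest flip index, so repeated
    runs follow identical trajectories (no random restarts or sampling)."""
    x = list(x0)
    best, best_val = list(x), objective(x)
    tabu = {}  # bit index -> last iteration at which flipping it is forbidden
    for it in range(iters):
        candidates = []
        for j in range(len(x)):
            x[j] ^= 1
            val = objective(x)
            x[j] ^= 1
            # Aspiration: a tabu move is allowed if it beats the best known.
            if tabu.get(j, -1) < it or val < best_val:
                candidates.append((val, j))
        if not candidates:
            break
        val, j = min(candidates)  # deterministic tie-breaking
        x[j] ^= 1                 # take the best non-tabu move, even if worsening
        tabu[j] = it + tenure
        if val < best_val:
            best, best_val = list(x), val
    return best, best_val
```

Because the move selection is a `min` over a fixed scan order, the trajectory is fully reproducible — the property that lets one isolate the influence of random factors by comparing against a randomized variant.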
90

Minimização de funções decomponíveis em curvas em U definidas sobre cadeias de posets -- algoritmos e aplicações / Minimization of decomposable in U-shaped curves functions defined on poset chains -- algorithms and applications

Reis, Marcelo da Silva 28 November 2012 (has links)
The feature selection problem, in the context of Pattern Recognition, consists in choosing a subset X of a set S of features such that X is "optimal" under some criterion. Assuming the choice of a proper cost function c, the feature selection problem reduces to a search problem that uses c to evaluate the subsets of S and thereby find an optimal feature subset. However, the feature selection problem is NP-hard. Although there is a myriad of algorithms and heuristics in the literature to tackle this problem, almost none of those techniques exploits the fact that there are cost functions whose values are estimated from a sample and describe a "U-shaped curve" on the chains of the Boolean lattice (P(S), <=), a well-known phenomenon in Pattern Recognition: for a fixed number of samples, increasing the number of considered features at first reduces the cost of the evaluated subset, until the limited sample size makes each additional feature increase the cost, due to the growth of the estimation error. In 2010, Ris et al. proposed a new algorithm to solve this particular case of the feature selection problem: their algorithm exploits the fact that the search space may be organized as a Boolean lattice, as well as the U-shaped structure of the chains of this lattice, to find an optimal feature subset. In this work, we studied the structure of the minimization problem for cost functions whose chains are decomposable into U-shaped curves (the U-curve problem) and proved that this problem is NP-hard. We showed that the algorithm introduced by Ris et al. has an error that leads to suboptimal solutions, and we proposed a corrected and improved version, the U-Curve-Search (UCS) algorithm. Moreover, to manage the search space in a more systematic way, we also presented two modifications of the UCS algorithm. We introduced two new branch-and-bound algorithms to tackle the U-curve problem, namely U-Curve-Branch-and-Bound (UBB) and Poset-Forest-Search (PFS). For each algorithm presented in this thesis, we provided a time complexity analysis and, for some of them, also a proof of correctness. We implemented all algorithms in the featsel framework, which was also developed in this work; we performed optimal and suboptimal experiments with instances from real and simulated data and analyzed the results. Finally, we proposed a generalization of the U-curve problem that models some kinds of classifier design; we also proved the correctness of the UCS, UBB, and PFS algorithms for this generalized version of the U-curve problem.
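The chain-level structure these algorithms exploit can be illustrated in isolation: when the cost is strictly U-shaped along a chain, its minimum can be located by a ternary search instead of a full scan. This sketch covers a single chain only, not the lattice search of UCS/UBB/PFS; the nested-subset chain and the quadratic cost in the test are illustrative assumptions.

```python
def chain_minimum(cost, chain):
    """Ternary search for the minimizer of a cost that is strictly U-shaped
    along a chain (here, a list of nested feature subsets): first strictly
    decreasing, then strictly increasing. Needs O(log n) cost evaluations."""
    lo, hi = 0, len(chain) - 1
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if cost(chain[m1]) < cost(chain[m2]):
            hi = m2 - 1  # the minimum cannot lie at m2 or to its right
        else:
            lo = m1 + 1  # the minimum cannot lie at m1 or to its left
    return min(range(lo, hi + 1), key=lambda i: cost(chain[i]))
```

For instance, on the chain of nested subsets `{}, {0}, {0,1}, ...` with a cost that dips at a fixed subset size, the search returns the index of that subset while evaluating only a logarithmic number of chain elements.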
