Data Structures And Algorithms | 2019-05-03

in #algorithms

Near-Linear Approximation Algorithms for Scheduling Problems with Batch Setup Times (1810.01223v2)

Max A. Deppert, Klaus Jansen

2018-10-02

We investigate the scheduling of n jobs divided into c classes on m identical parallel machines. For every class there is a setup time which is required whenever a machine switches from the processing of one class to another class. The objective is to find a schedule that minimizes the makespan. We give near-linear approximation algorithms for the following problem variants: the non-preemptive context where jobs may not be preempted, the preemptive context where jobs may be preempted but not parallelized, as well as the splittable context where jobs may be preempted and parallelized. We present the first algorithm improving the previously best approximation ratio of 2 to a better ratio of 3/2 in the preemptive case. In more detail, for all three flavors we present an approximation ratio 2 with running time O(n), ratio 3/2 + ε in time O(n log(1/ε)) as well as a ratio of 3/2. The 3/2-approximate algorithms have different running times. In the non-preemptive case we get time O(n log(n + Δ)), where Δ is the largest value of the input. The splittable approximation runs in time O(n + c log(c + m)) whereas the preemptive algorithm has a running time O(n log(c + m)). So far, no PTAS is known for the preemptive problem without restrictions, so we make progress towards that question. Recently Jansen et al. found an EPTAS for the splittable and non-preemptive case, but with impractical running times exponential in 1/ε.
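
As a toy illustration of the problem setting (and not of the paper's near-linear algorithms), the sketch below groups jobs by class and list-schedules each whole class batch, setup time included, onto the least-loaded machine; all names and parameters are invented for the example.

```python
# Illustrative heuristic only: group jobs by class, then assign each whole
# class batch (setup time + total processing time) to the least-loaded machine.
import heapq
from collections import defaultdict

def greedy_schedule(jobs, setup, m):
    """jobs: list of (class_id, length); setup[c]: setup time of class c."""
    batches = defaultdict(float)
    for cls, length in jobs:
        batches[cls] += length
    # Placing the largest batches first tends to balance loads better.
    order = sorted(batches, key=batches.get, reverse=True)
    loads = [(0.0, i) for i in range(m)]  # min-heap of (load, machine)
    heapq.heapify(loads)
    assignment = defaultdict(list)
    for cls in order:
        load, i = heapq.heappop(loads)
        load += setup[cls] + batches[cls]
        assignment[i].append(cls)
        heapq.heappush(loads, (load, i))
    makespan = max(load for load, _ in loads)
    return makespan, dict(assignment)

# Example: two classes on three machines.
jobs = [(0, 4), (0, 2), (1, 5), (1, 1), (1, 3)]
print(greedy_schedule(jobs, setup={0: 1, 1: 2}, m=3))
```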

Fast hashing with Strong Concentration Bounds (1905.00369v1)

Anders Aamand, Jakob B. T. Knudsen, Mathias B. T. Knudsen, Peter M. R. Rasmussen, Mikkel Thorup

2019-05-01

Previous work on tabulation hashing of Pătraşcu and Thorup from STOC'11 on simple tabulation and from SODA'13 on twisted tabulation offered Chernoff-style concentration bounds on hash-based sums, but under some quite severe restrictions on the expected values of these sums. More precisely, the basic idea in tabulation hashing is to view a key as consisting of c characters, e.g., a 64-bit key as c = 8 characters of 8 bits. The character domain Σ should be small enough that character tables of size |Σ| fit in fast cache. The schemes then use O(1) tables of this size, so the space of tabulation hashing is O(|Σ|). However, the above concentration bounds only apply if the expected sums are much smaller than |Σ|. To see the problem, consider the very simple case where we use tabulation hashing to throw n balls into m bins and apply Chernoff bounds to the number of balls that land in a given bin. We are fine if m ≥ n, for then the expected value is at most 1. However, if m = 2 bins, as when tossing n unbiased coins, then the expectancy n/2 is much larger than |Σ| for large data sets, e.g., data sets that don't fit in fast cache. To handle expectations that go beyond the limits of our small space, we need a much more advanced analysis of simple tabulation, plus a new tabulation technique that we call tabulation-permutation hashing, which is at most twice as slow as simple tabulation. No other hashing scheme of comparable speed offers similar Chernoff-style concentration bounds.
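
A minimal sketch of plain simple tabulation as described above: a 64-bit key is split into c = 8 characters of 8 bits, and the hash is the XOR of one random table lookup per character. The seed, output width, and ball-throwing experiment are illustrative choices, not the paper's parameters.

```python
# Simple tabulation hashing: per-character random tables, combined by XOR.
import random

random.seed(42)
C, BITS = 8, 8                      # 8 characters of 8 bits each
TABLES = [[random.getrandbits(64) for _ in range(1 << BITS)]
          for _ in range(C)]

def simple_tabulation(key: int) -> int:
    h = 0
    for i in range(C):
        char = (key >> (BITS * i)) & ((1 << BITS) - 1)
        h ^= TABLES[i][char]
    return h

# Throw n balls into m = 2 bins, the regime the abstract warns about.
n, m = 100_000, 2
bin0 = sum(1 for x in range(n) if simple_tabulation(x) % m == 0)
print(bin0, n - bin0)
```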

A faster hafnian formula for complex matrices and its benchmarking on a supercomputer (1805.12498v3)

Andreas Björklund, Brajesh Gupt, Nicolás Quesada

2018-05-31

We introduce new and simple algorithms for the calculation of the number of perfect matchings of complex weighted, undirected graphs with and without loops. Our compact formulas for the hafnian and loop hafnian of n × n complex matrices run in O(n^3 2^(n/2)) time, are embarrassingly parallelizable and, to the best of our knowledge, are the fastest exact algorithms to compute these quantities. Despite our highly optimized algorithm, numerical benchmarks on the Titan supercomputer with matrices up to size 56 indicate that one would require the 288000 CPUs of this machine for about a month and a half to compute the hafnian of a 100 × 100 matrix.
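
To pin down the quantity being computed, here is a small reference implementation of the hafnian as a sum over perfect matchings, via the recursion haf(A) = sum over j of A[0][j] * haf(A with rows and columns 0 and j removed). It takes roughly (n-1)!! time and only makes the definition concrete; it is no substitute for the paper's O(n^3 2^(n/2)) formula.

```python
# Brute-force hafnian: sum over all perfect matchings of the index set.
def hafnian(A):
    def rec(rows):
        if not rows:
            return 1
        i, rest = rows[0], rows[1:]
        total = 0
        for k, j in enumerate(rest):          # pair row i with each row j
            remaining = rest[:k] + rest[k+1:]
            total += A[i][j] * rec(remaining)
        return total
    return rec(list(range(len(A))))

# The hafnian of the all-ones 4x4 matrix counts perfect matchings of K_4.
A = [[1] * 4 for _ in range(4)]
print(hafnian(A))  # 3
```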

Independent Set Reconfiguration Parameterized by Modular-Width (1905.00340v1)

Rémy Belmonte, Tesshu Hanaka, Michael Lampis, Hirotaka Ono, Yota Otachi

2019-05-01

Independent Set Reconfiguration is one of the most well-studied problems in the setting of combinatorial reconfiguration. It is known that the problem is PSPACE-complete even for graphs of bounded bandwidth. This fact rules out the tractability of parameterizations by most well-studied structural parameters, as most of them generalize bandwidth. In this paper, we study the parameterization by modular-width, which is not comparable with bandwidth. We show that the problem parameterized by modular-width is fixed-parameter tractable under all previously studied rules: token addition/removal (TAR), token jumping (TJ), and token sliding (TS). The result under TAR resolves an open problem posed by Bonsma [WG 2014, JGT 2016].
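
As a sketch of what reconfiguration means here (and not of the paper's FPT algorithm), the following brute-force search decides reachability under the token jumping (TJ) rule, where one move relocates a single token to any other vertex; the function name and graph encoding are invented for the example, and the search is exponential in the number of vertices.

```python
# Brute-force reconfiguration under Token Jumping: BFS over independent sets.
from itertools import combinations
from collections import deque

def tj_reachable(n, edges, start, goal):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))
    start, goal = frozenset(start), frozenset(goal)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        for out in cur:                      # move one token...
            for into in range(n):            # ...to any other vertex
                if into in cur:
                    continue
                nxt = frozenset(cur - {out} | {into})
                if nxt not in seen and independent(nxt):
                    seen.add(nxt); queue.append(nxt)
    return False

# Path on 4 vertices: reconfigure {0, 2} into {1, 3}.
print(tj_reachable(4, [(0, 1), (1, 2), (2, 3)], {0, 2}, {1, 3}))  # True
```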

Limits on the Universal Method for Matrix Multiplication (1812.08731v2)

Josh Alman

2018-12-20

In this work, we prove limitations on the known methods for designing matrix multiplication algorithms. Alman and Vassilevska Williams recently defined the Universal Method, which substantially generalizes all the known approaches including Strassen's Laser Method and Cohn and Umans' Group Theoretic Method. We prove concrete lower bounds on the algorithms one can design by applying the Universal Method to many different tensors. Our proofs use new tools for upper bounding the asymptotic slice rank of a wide range of tensors. Our main result is that the Universal Method applied to any Coppersmith-Winograd tensor CW_q cannot yield a bound on ω, the exponent of matrix multiplication, better than 2.168. By comparison, it was previously only known that the weaker 'Galactic Method' applied to CW_q could not achieve an exponent of 2. We also study the Laser Method (which is, in principle, a highly special case of the Universal Method) and prove that it is "complete" for matrix multiplication algorithms: when it applies to a tensor T, it achieves ω = 2 if and only if it is possible for the Universal Method applied to T to achieve ω = 2. Hence, the Laser Method, which was originally used as an algorithmic tool, can also be seen as a lower bounding tool. For example, in their landmark paper, Coppersmith and Winograd achieved a bound of ω ≤ 2.376 by applying the Laser Method to the tensor CW_q. By our result, the fact that they did not achieve ω = 2 implies a lower bound on the Universal Method applied to CW_q. Indeed, if it were possible for the Universal Method applied to CW_q to achieve ω = 2, then Coppersmith and Winograd's application of the Laser Method would have achieved ω = 2.
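
For context on what a "bound on ω" means in practice, the classical ancestor of the Laser Method is Strassen's algorithm, which multiplies two n × n matrices with seven recursive products instead of eight, giving ω ≤ log2(7) ≈ 2.807. A minimal sketch (the cutoff and matrix sizes are arbitrary choices, powers of two only):

```python
# Strassen's recursion: 7 half-size products instead of 8.
import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n <= 64:                        # cutoff: fall back to the naive product
        return A @ B
    h = n // 2
    a, b, c, d = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    e, f, g, i = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    p1 = strassen(a, f - i)
    p2 = strassen(a + b, i)
    p3 = strassen(c + d, e)
    p4 = strassen(d, g - e)
    p5 = strassen(a + d, e + i)
    p6 = strassen(b - d, g + i)
    p7 = strassen(a - c, e + f)
    top = np.hstack([p5 + p4 - p2 + p6, p1 + p2])
    bot = np.hstack([p3 + p4, p1 + p5 - p3 - p7])
    return np.vstack([top, bot])

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```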

Parameterized Complexity of Conflict-free Graph Coloring (1905.00305v1)

Hans L. Bodlaender, Sudeshna Kolay, Astrid Pieterse

2019-05-01

Given a graph G, a q-open neighborhood conflict-free coloring or q-ONCF-coloring is a vertex coloring with q colors such that for each vertex v there is a vertex in the open neighborhood N(v) that is uniquely colored from the rest of the vertices in N(v). When we replace N(v) by the closed neighborhood N[v], then we call such a coloring a q-closed neighborhood conflict-free coloring or simply q-CNCF-coloring. In this paper, we study the NP-hard decision questions of whether for a constant q an input graph has a q-ONCF-coloring or a q-CNCF-coloring. We study these two problems in the parameterized setting. First of all, we study running time bounds on FPT-algorithms for these problems when parameterized by treewidth. We improve the existing upper bounds, and also provide lower bounds on the running time under ETH and SETH. Secondly, we study the kernelization complexity of both problems, using vertex cover as the parameter. We show that q-ONCF-coloring for q ≥ 2 and q-CNCF-coloring for q ≥ 3 cannot have polynomial kernels when parameterized by the size of a vertex cover unless NP ⊆ coNP/poly. However, we obtain a polynomial kernel for 2-CNCF-coloring parameterized by vertex cover. We conclude with some combinatorial results. Denote by χ_ON(G) and χ_CN(G) the minimum number of colors required to ONCF-color and CNCF-color G, respectively. Upper bounds on χ_CN(G) with respect to structural parameters like minimum vertex cover size, minimum feedback vertex set size and treewidth are known. To the best of our knowledge, only an upper bound on χ_ON(G) with respect to minimum vertex cover size was known. We provide tight bounds for χ_ON(G) with respect to minimum vertex cover size. Also, we provide the first upper bounds on χ_ON(G) with respect to minimum feedback vertex set size and treewidth.
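
A small checker makes the ONCF definition concrete: a coloring qualifies if every vertex has some neighbor whose color appears exactly once in its open neighborhood. The function name and graph encoding below are invented for the example.

```python
# Verify the q-ONCF-coloring property for a given coloring.
from collections import Counter

def is_oncf(n, edges, color):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    for v in range(n):
        counts = Counter(color[u] for u in adj[v])
        if not any(c == 1 for c in counts.values()):
            return False                 # no uniquely colored neighbor
    return True

# A 4-cycle 0-1-2-3-0 with colors 1,1,2,2 is a 2-ONCF-coloring.
print(is_oncf(4, [(0, 1), (1, 2), (2, 3), (3, 0)],
              {0: 1, 1: 1, 2: 2, 3: 2}))  # True
```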

An improved analysis of the Mömke-Svensson algorithm for graph-TSP on subquartic graphs (1407.2524v3)

Alantha Newman

2014-07-09

Mömke and Svensson presented a beautiful new approach for the traveling salesman problem on a graph metric (graph-TSP), which yields a 4/3-approximation guarantee on subcubic graphs as well as a substantial improvement over the 3/2-approximation guarantee of Christofides' algorithm on general graphs. The crux of their approach is to compute an upper bound on the minimum cost of a circulation in a particular network, C(G, T), where G is the input graph and T is a carefully chosen spanning tree. The cost of this circulation is directly related to the number of edges in a tour output by their algorithm. Mucha subsequently improved the analysis of the circulation cost, proving that Mömke and Svensson's algorithm for graph-TSP has an approximation ratio of at most 13/9 on general graphs. This analysis of the circulation is local, and vertices with degree four and five can contribute the most to its cost. Thus, hypothetically, there could exist a subquartic graph (a graph with degree at most four at each vertex) for which Mucha's analysis of the Mömke-Svensson algorithm is tight. We show that this is not the case and that Mömke and Svensson's algorithm for graph-TSP has an approximation guarantee of at most 46/33 on subquartic graphs. To prove this, we present different methods to upper bound the minimum cost of a circulation on the network C(G, T). Our approximation guarantee holds for all graphs that have an optimal solution to a standard linear programming relaxation of graph-TSP with subquartic support.
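
For contrast with the ratios above, the classical double-tree heuristic gives a simple 2-approximation for graph-TSP: double a spanning tree and shortcut its Euler tour, with tour cost measured in the shortest-path metric. A minimal sketch under unit edge weights, with names invented for the example:

```python
# Double-tree heuristic: DFS preorder of a spanning tree is exactly the
# shortcut Euler tour of the doubled tree.
def double_tree_tour(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    seen, order = {0}, []
    def dfs(v):                        # the DFS tree serves as the spanning tree
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                dfs(w)
    dfs(0)
    return order + [0]                 # return to the start vertex

# A 6-cycle with one chord; consecutive tour vertices may be non-adjacent,
# in which case the graph metric charges their shortest-path distance.
print(double_tree_tour(6, [(0, 1), (1, 2), (2, 3),
                           (3, 4), (4, 5), (5, 0), (0, 3)]))
```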

Assignment Mechanisms under Distributional Constraints (1810.04331v2)

Itai Ashlagi, Amin Saberi, Ali Shameli

2018-10-10

We study the assignment problem of objects to agents with heterogeneous preferences under distributional constraints. Each agent is associated with a publicly known type and has a private ordinal ranking over objects. We are interested in assigning as many agents as possible. Our first contribution is a generalization of the well-known and widely used serial dictatorship. Our mechanism maintains several desirable properties of serial dictatorship, including strategyproofness, Pareto efficiency, and computational tractability while satisfying the distributional constraints with a small error. We also propose a generalization of the probabilistic serial algorithm, which finds an ordinally efficient and envy-free assignment, and also satisfies the distributional constraints with a small error. We show, however, that no ordinally efficient and envy-free mechanism is also weakly strategyproof. Both of our algorithms assign at least the same number of students as the optimum fractional assignment.
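
The baseline being generalized is easy to state in code: in serial dictatorship, agents pick in a fixed order and each takes their most-preferred remaining object. The sketch below ignores the paper's distributional constraints and types entirely; it only shows the mechanism being built upon.

```python
# Plain serial dictatorship: each agent, in turn, takes their top remaining object.
def serial_dictatorship(preferences):
    """preferences[i] is agent i's ranked list of objects."""
    taken, assignment = set(), {}
    for agent, ranking in enumerate(preferences):
        for obj in ranking:
            if obj not in taken:
                taken.add(obj)
                assignment[agent] = obj
                break
    return assignment

prefs = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(serial_dictatorship(prefs))  # {0: 'a', 1: 'c', 2: 'b'}
```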

Solving large Minimum Vertex Cover problems on a quantum annealer (1904.00051v3)

Elijah Pelofske, Georg Hahn, Hristo N. Djidjev

2019-03-29

We consider the minimum vertex cover problem, which has applications in, e.g., biochemistry and network security. Quantum annealers can find the optimum solution of such NP-hard problems, provided the problem can be embedded on the hardware. This is often infeasible due to limitations of the hardware connectivity structure. This paper presents a decomposition algorithm for the minimum vertex cover problem: the algorithm recursively divides an arbitrary problem until the generated subproblems can be embedded and solved on the annealer. To speed up the decomposition, we propose several pruning and reduction techniques. The performance of our algorithm is assessed in a simulation study.
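
Annealers take such problems in QUBO form; below is the standard textbook QUBO for minimum vertex cover (a penalty P per uncovered edge plus a cost of 1 per chosen vertex, as in Lucas's 2014 survey of Ising formulations), brute-forced classically on a toy graph. This shows only the encoding, not the paper's decomposition algorithm.

```python
# QUBO for minimum vertex cover: minimize sum(x_v) + P * sum over edges of
# (1 - x_u)(1 - x_v). Expanding and dropping the constant P*|E| gives the
# coefficients below; P must exceed the per-vertex cost (1) for correctness.
from itertools import product

def mvc_qubo(n, edges, P=2.0):
    Q = {}
    for v in range(n):
        Q[(v, v)] = 1.0                      # cost of including vertex v
    for u, v in edges:                       # P * (-x_u - x_v + x_u x_v)
        Q[(u, u)] = Q.get((u, u), 0) - P
        Q[(v, v)] = Q.get((v, v), 0) - P
        Q[(u, v)] = Q.get((u, v), 0) + P
    return Q  # energy = sum Q[i,j] x_i x_j, up to the constant P*len(edges)

def brute_force(n, Q):
    def energy(x):
        return sum(val * x[i] * x[j] for (i, j), val in Q.items())
    return min(product([0, 1], repeat=n), key=energy)

edges = [(0, 1), (0, 2), (0, 3), (1, 2)]     # small star plus a triangle edge
print(brute_force(4, mvc_qubo(4, edges)))    # a minimum cover of size 2, e.g. {0, 2}
```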

Separate Chaining Meets Compact Hashing (1905.00163v1)

Dominik Köppl

2019-05-01

While separate chaining is a collision-resolution strategy for hash tables taught in most textbooks, compact hashing is a less common technique for saving space when hashing integers whose domain is relatively small with respect to the problem size. It is widely believed that hash tables waste a considerable amount of memory, as they either leave allocated space untouched (open addressing) or store additional pointers (separate chaining). For the former, Cleary introduced the compact hashing technique, which stores only a part of each key to save space. However, as can be seen from the line of research focusing on compact hash tables with open addressing, additional information, called displacement, is required for restoring a key. There are several representations of this displacement information with different space and time trade-offs. In this article, we introduce a separate chaining hash table that applies the compact hashing technique without the need for displacement information. Practical evaluations reveal that insertions in this hash table are faster or use less space than all previously known compact hash tables on modern computer architectures when storing sufficiently large satellite data.
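
The space saving can be sketched with quotienting: if the bucket index is key mod M, a chain only needs to store the quotient key // M, because the bucket index is implicit in which chain an entry lives in, so no displacement information is needed. A toy sketch under that assumption (class and method names invented here; the paper's table is far more engineered):

```python
# Separate chaining with quotienting: chains store only key quotients.
class CompactChainingTable:
    def __init__(self, num_buckets):
        self.m = num_buckets
        self.chains = [[] for _ in range(num_buckets)]

    def insert(self, key, value):
        bucket, quotient = key % self.m, key // self.m
        chain = self.chains[bucket]
        for idx, (q, _) in enumerate(chain):
            if q == quotient:                # key already present: update
                chain[idx] = (quotient, value)
                return
        chain.append((quotient, value))      # full key = quotient*m + bucket

    def lookup(self, key):
        bucket, quotient = key % self.m, key // self.m
        for q, value in self.chains[bucket]:
            if q == quotient:
                return value
        return None

t = CompactChainingTable(1024)
t.insert(123_456_789, "payload")
print(t.lookup(123_456_789))  # 'payload'
```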


