Data Structures And Algorithms | 2019-07-10

in #datastructure



Multi-budgeted directed cuts (1810.06848v2)

Stefan Kratsch, Shaohua Li, Dániel Marx, Marcin Pilipczuk, Magnus Wahlström

2018-10-16

We study multi-budgeted variants of the classic minimum cut problem and graph separation problems that turned out to be important in parameterized complexity: Skew Multicut and Directed Feedback Arc Set. In our generalization, we assign colors 1, 2, …, ℓ to some edges and give separate budgets k_1, k_2, …, k_ℓ. Let E_i be the set of edges of color i. The solution C for the multi-budgeted variant of a graph separation problem not only needs to satisfy the usual separation requirements, but also needs to satisfy that |C ∩ E_i| ≤ k_i for every i. Contrary to the classic minimum cut problem, the multi-budgeted variant turns out to be NP-hard even for ℓ = 2. We propose FPT algorithms parameterized by k = k_1 + … + k_ℓ for all three problems. To this end, we develop a branching procedure for the multi-budgeted minimum cut problem that measures the progress of the algorithm not by reducing k as usual, but by elevating the capacity of some edges and thus increasing the size of the maximum source-to-sink flow. Using the fact that a similar strategy is used to enumerate all important separators of a given size, we merge this process with the flow-guided branching and show an FPT bound on the number of (appropriately defined) important multi-budgeted separators. This allows us to extend our algorithm to the Skew Multicut and Directed Feedback Arc Set problems. Furthermore, we show connections of the multi-budgeted variants with weighted variants of the directed cut problems and the Chain ℓ-SAT problem, whose parameterized complexity remains an open problem. We show that these problems admit a bounded-in-parameter number of "maximally pushed" solutions (in a similar spirit as important separators are maximally pushed), giving somewhat weak evidence towards their tractability.
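To make the budget constraints above concrete, here is a minimal Python sketch (with illustrative names) that checks whether a candidate edge set is a valid multi-budgeted s-t cut: it must separate s from t and use at most k_i edges of each color i. It only verifies a solution; it is not the paper's branching algorithm.

```python
# Minimal sketch: verifying a multi-budgeted s-t cut as defined above.
# Graph representation and helper names are illustrative, not from the paper.
from collections import defaultdict, deque

def separates(edges, removed, s, t):
    """Return True if deleting `removed` leaves no directed s->t path (BFS)."""
    adj = defaultdict(list)
    for u, v in edges:
        if (u, v) not in removed:
            adj[u].append(v)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return False
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return True

def respects_budgets(cut, color_of, budgets):
    """Check |C ∩ E_i| <= k_i for every color i; uncolored edges are unconstrained."""
    used = defaultdict(int)
    for e in cut:
        if e in color_of:
            used[color_of[e]] += 1
    return all(used[i] <= k for i, k in budgets.items())

# Tiny example: two parallel s->t paths, colored edges, budget 1 per color.
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
color_of = {("s", "a"): 1, ("s", "b"): 2}
budgets = {1: 1, 2: 1}
cut = {("s", "a"), ("s", "b")}
print(separates(edges, cut, "s", "t") and respects_budgets(cut, color_of, budgets))  # True
```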

Collapsing Superstring Conjecture (1809.08669v2)

Alexander Golovnev, Alexander S. Kulikov, Alexander Logunov, Ivan Mihajlin

2018-09-23

In the Shortest Common Superstring (SCS) problem, one is given a collection of strings, and needs to find a shortest string containing each of them as a substring. SCS admits a 2 11/23-approximation in polynomial time (Mucha, SODA'13). While this algorithm and its analysis are technically involved, the 30-year-old Greedy Conjecture claims that the trivial and efficient Greedy Algorithm gives a 2-approximation for SCS. We develop a graph-theoretic framework for studying approximation algorithms for SCS. The framework is reminiscent of the classical 2-approximation for Traveling Salesman: take two copies of an optimal solution, apply a trivial edge-collapsing procedure, and get an approximate solution. In this framework, we observe two surprising properties of SCS solutions, and we conjecture that they hold for all input instances. The first conjecture, which we call the Collapsing Superstring conjecture, claims that there is an elementary way to transform any solution repeated twice into the same graph G. This conjecture would give an elementary 2-approximation algorithm for SCS. The second conjecture claims that not only is the resulting graph G the same for all solutions, but that G can be computed by an elementary greedy procedure called the Greedy Hierarchical Algorithm. While the second conjecture clearly implies the first one, perhaps surprisingly we prove their equivalence. We support these equivalent conjectures by giving a proof for the special case where all input strings have length at most 3. We prove that the standard Greedy Conjecture implies the Greedy Hierarchical Conjecture, while the latter is sufficient for an efficient greedy 2-approximation of SCS. Aside from its (conjectured) good approximation ratio, the Greedy Hierarchical Algorithm provably finds a 3.5-approximation.
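For reference, the "trivial and efficient Greedy Algorithm" that the Greedy Conjecture refers to is the classic merge-by-maximum-overlap procedure. A straightforward Python sketch follows (names are illustrative; this is not the paper's Greedy Hierarchical Algorithm):

```python
# The classic Greedy Algorithm for Shortest Common Superstring:
# repeatedly merge the two strings with the largest suffix-prefix overlap.

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_scs(strings):
    # Drop strings contained in others; they never affect the superstring.
    strings = [s for s in strings
               if not any(s != t and s in t for t in strings)]
    strings = list(dict.fromkeys(strings))  # deduplicate, keep order
    while len(strings) > 1:
        best = None  # (overlap, i, j)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if best is None or k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)]
        strings.append(merged)
    return strings[0]

print(greedy_scs(["abc", "bcd", "cde"]))  # "abcde"
```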

SANNS: Scaling Up Secure Approximate k-Nearest Neighbors Search (1904.02033v3)

Hao Chen, Ilaria Chillotti, Yihe Dong, Oxana Poburinnaya, Ilya Razenshteyn, M. Sadegh Riazi

2019-04-03

We present new secure protocols for approximate k-nearest neighbor search (k-NNS) over the Euclidean distance in the semi-honest model, which scale to massive datasets. One of the new ingredients is a comparator-based circuit for approximate top-k selection from n numbers. Using this circuit as a subroutine, we design new k-NNS algorithms and two corresponding secure protocols: 1) an optimized linear scan; 2) a clustering-based sublinear-time algorithm. The new secure protocols utilize a combination of additively homomorphic encryption, garbled circuits and oblivious RAM. Along the way, we introduce various optimizations to these primitives, which drastically improve concrete efficiency. We evaluate the new protocols empirically and show that they are able to handle datasets that are significantly larger than in prior work. For instance, running on two standard Azure instances within the same availability zone, on a dataset of high-dimensional image descriptors, we can find nearest neighbors with high average accuracy in a matter of seconds, improving upon prior work by at least two orders of magnitude.
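As a point of reference, the plaintext computation that the linear-scan protocol protects looks roughly like the sketch below; the cryptographic layer and the paper's approximate top-k circuit are omitted, and all sizes and parameters are illustrative.

```python
# A minimal plaintext sketch of linear-scan k-NNS over Euclidean distance.
# The paper evaluates this kind of computation under additively homomorphic
# encryption, garbled circuits and ORAM; none of that is shown here.
import numpy as np

def knns_linear_scan(database, query, k):
    """Exact k nearest neighbors of `query` under Euclidean distance."""
    dists = np.linalg.norm(database - query, axis=1)   # one distance per point
    idx = np.argpartition(dists, k)[:k]                # exact top-k here; the paper
    return idx[np.argsort(dists[idx])]                 # uses an approximate circuit

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 96))   # dimensions chosen only for illustration
q = rng.standard_normal(96)
print(knns_linear_scan(db, q, k=10))
```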

Contraction Clustering (RASTER): A Very Fast Big Data Algorithm for Sequential and Parallel Density-Based Clustering in Linear Time, Constant Memory, and a Single Pass (1907.03620v1)

Gregor Ulm, Simon Smith, Adrian Nilsson, Emil Gustavsson, Mats Jirstrand

2019-07-08

Clustering is an essential data mining tool for analyzing and grouping similar objects. In big data applications, however, many clustering algorithms are infeasible due to their high memory requirements and/or unfavorable runtime complexity. In contrast, Contraction Clustering (RASTER) is a single-pass algorithm for identifying density-based clusters with linear time complexity. Due to its favorable runtime and the fact that its memory requirements are constant, this algorithm is highly suitable for big data applications where the amount of data to be processed is huge. It consists of two steps: (1) a contraction step which projects objects onto tiles and (2) an agglomeration step which groups tiles into clusters. This algorithm is extremely fast in both sequential and parallel execution. In single-threaded execution on a contemporary workstation, an implementation in Rust processes a batch of 500 million points with 1 million clusters in less than 50 seconds. The speedup due to parallelization is significant, amounting to a factor of around 4 on an 8-core machine.
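The two steps lend themselves to a compact sketch. The following Python code mirrors the contraction/agglomeration structure described above with illustrative tile size and density threshold; it is a simplified re-creation, not the authors' Rust implementation.

```python
# A RASTER-like sketch: (1) contraction: project points onto integer tiles;
# (2) agglomeration: group neighboring significant tiles via flood fill.
from collections import defaultdict, deque

def raster_like(points, precision=1.0, min_points=3):
    # Contraction: count how many points fall into each tile.
    counts = defaultdict(int)
    for x, y in points:
        counts[(int(x // precision), int(y // precision))] += 1
    significant = {t for t, c in counts.items() if c >= min_points}

    # Agglomeration: connected components of significant tiles (8-neighborhood).
    clusters, seen = [], set()
    for start in significant:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            tx, ty = queue.popleft()
            comp.append((tx, ty))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (tx + dx, ty + dy)
                    if nb in significant and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(comp)
    return clusters

pts = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (5.1, 5.2), (5.3, 5.1), (5.2, 5.4)]
print(raster_like(pts, precision=1.0, min_points=3))  # two single-tile clusters
```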

Stationary time-vertex signal processing (1611.00255v3)

Andreas Loukas, Nathanaël Perraudin

2016-11-01

This paper considers regression tasks involving high-dimensional multivariate processes whose structure depends on some known graph topology. We put forth a new definition of time-vertex wide-sense stationarity, or joint stationarity for short, that goes beyond product graphs. Joint stationarity helps by reducing the estimation variance and recovery complexity. In particular, for any jointly stationary process, (a) one reliably learns the covariance structure from as little as a single realization of the process, and (b) one solves MMSE recovery problems, such as interpolation and denoising, in computational time nearly linear in the number of edges and timesteps. Experiments with three datasets suggest that joint stationarity can yield accuracy improvements in the recovery of high-dimensional processes evolving over a graph, even when the latter is only approximately known, or the process is not strictly stationary.
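As a rough illustration of the kind of MMSE recovery enabled by stationarity assumptions, the following sketch performs Wiener denoising in a graph spectral basis for a purely vertex-domain signal with an assumed power spectral density; the paper's joint time-vertex estimators and near-linear-time solvers are not reproduced here.

```python
# Generic sketch: MMSE (Wiener) denoising in a graph spectral basis.
# The graph, signal model and noise level below are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Laplacian of a path graph on n vertices (a stand-in for an arbitrary known graph).
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
lam, U = np.linalg.eigh(L)                 # graph Fourier basis

# A stationary signal: independent spectral coefficients with known power.
power = 1.0 / (1.0 + lam)                  # assumed power spectral density
x = U @ (np.sqrt(power) * rng.standard_normal(n))
sigma = 0.5
y = x + sigma * rng.standard_normal(n)     # noisy observation

# MMSE/Wiener filter in the spectral domain: h = p / (p + sigma^2).
h = power / (power + sigma**2)
x_hat = U @ (h * (U.T @ y))

print("noisy error:", np.linalg.norm(y - x), "denoised error:", np.linalg.norm(x_hat - x))
```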

A simple proof for a polynomial time 3-SAT solver (1903.10081v7)

Ortho Flint, Asanka Wickramasinghe, Jason Brasse, Christopher Fowler

2019-03-24

In this paper, we provide a polynomial time (and space) algorithm that determines satisfiability of 3-SAT. The complexity analysis for the algorithm takes no efficiencies into account and yet provides a bound low enough that efficient versions are practical with respect to today's hardware. We accompany this paper with a serial version of the algorithm without non-trivial efficiencies.

More Hierarchy in Route Planning Using Edge Hierarchies (1907.03535v1)

Demian Hespe, Peter Sanders

2019-07-08

A highly successful approach to route planning in networks (particularly road networks) is to identify a hierarchy in the network that allows faster queries after some preprocessing that basically inserts additional "shortcut" edges into a graph. In the past there has been a succession of techniques that infer a more and more fine-grained hierarchy, enabling increasingly efficient queries. This appeared to culminate in contraction hierarchies, which assign one hierarchy level to each vertex. In this paper we show how to identify an even more fine-grained hierarchy that assigns one level to each edge of the network. Our findings indicate that this can lead to considerably smaller search spaces in terms of visited edges. Currently, this does not result in improved query times, so it remains an open question whether these edge hierarchies can lead to overall improved performance. However, we believe that the technique as such is a noteworthy enrichment of the portfolio of available techniques that might prove useful in the future.
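For context, a contraction-hierarchy-style query with one level per vertex (the baseline the paper refines to one level per edge) can be sketched as two "upward" Dijkstra searches that meet in the middle. The sketch below assumes shortcuts have already been inserted by preprocessing and uses a hand-built toy graph; it is illustrative, not the paper's edge-hierarchy query.

```python
# Contraction-hierarchy-style query: only relax edges toward higher-level vertices.
import heapq

def upward_dijkstra(adj, level, source):
    """Dijkstra that only relaxes edges leading to higher-level vertices."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj.get(u, []):
            if level[v] > level[u] and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def ch_query(adj, level, s, t):
    """Meet-in-the-middle over the two upward search spaces."""
    fwd, bwd = upward_dijkstra(adj, level, s), upward_dijkstra(adj, level, t)
    return min((fwd[v] + bwd[v] for v in fwd.keys() & bwd.keys()), default=float("inf"))

# Path a - b - c (weight 1 each); contracting the lowest-level vertex b
# adds the shortcut a - c of weight 2.
adj = {
    "a": [("b", 1.0), ("c", 2.0)],
    "b": [("a", 1.0), ("c", 1.0)],
    "c": [("b", 1.0), ("a", 2.0)],
}
level = {"b": 1, "a": 2, "c": 3}
print(ch_query(adj, level, "a", "c"))  # 2.0
```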

Inapproximability Results for Scheduling with Interval and Resource Restrictions (1907.03526v1)

Marten Maack, Klaus Jansen

2019-07-08

In the restricted assignment problem, the input consists of a set of machines and a set of jobs, each with a processing time and a subset of eligible machines. The goal is to find an assignment of the jobs to the machines minimizing the makespan, that is, the maximum total processing time any machine receives. Herein, jobs may only be assigned to machines on which they are eligible. It is well known that there is no polynomial time approximation algorithm with an approximation guarantee of less than 1.5 for the restricted assignment problem unless P=NP. In this work, we show hardness results for variants of the restricted assignment problem with particular types of restrictions. In the case of interval restrictions, the machines can be totally ordered such that each job is eligible on a consecutive set of machines. We resolve the open question of whether the problem admits a polynomial time approximation scheme (PTAS) in the negative (unless P=NP). There are several special cases of this problem known to admit a PTAS. Furthermore, we consider a variant with resource restrictions where each machine has a capacity and each job a demand for each of a fixed number of resources. A job is eligible on a machine if its demand is at most the capacity of the machine for each resource. For one resource, this problem is known to admit a PTAS; for two, the case of interval restrictions is contained; and in general, the problem is closely related to unrelated scheduling with a low-rank processing time matrix. We show that there is no polynomial time approximation algorithm with a rate smaller than 48/47 or 1.5 for scheduling with resource restrictions with 2 or 4 resources, respectively, unless P=NP. All our results can be extended to the so-called Santa Claus variants of the problems, where the goal is to maximize the minimal processing time any machine receives.
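To make the problem statement concrete, here is a tiny brute-force sketch that computes the optimal makespan under eligibility constraints; it enumerates all assignments and is meant only to illustrate the definition, not as an approximation algorithm.

```python
# Brute-force makespan minimization for the restricted assignment problem.
# Exponential in the number of jobs; purely for illustrating the definition.
from itertools import product

def min_makespan(processing_time, eligible, num_machines):
    """processing_time[j]: time of job j; eligible[j]: set of machines job j may use."""
    best = float("inf")
    choices = [sorted(eligible[j]) for j in range(len(processing_time))]
    for assignment in product(*choices):
        load = [0.0] * num_machines
        for j, m in enumerate(assignment):
            load[m] += processing_time[j]
        best = min(best, max(load))
    return best

# Three jobs, two machines; the third job is eligible only on machine 0.
print(min_makespan([2.0, 2.0, 3.0], [{0, 1}, {0, 1}, {0}], num_machines=2))  # 4.0
```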

A Strongly Polynomial Algorithm for Finding a Shortest Non-zero Path in Group-Labeled Graphs (1906.04062v2)

Yutaro Yamaguchi

2019-06-10

Finding a shortest path or cycle in graphs is a fundamental problem. In this paper, we study the problem of finding a shortest non-zero path/cycle in group-labeled graphs with nonnegative edge lengths, which includes the following two types of tractable variants in undirected graphs, depending on the group in question. One is the parity-constrained shortest path/cycle problem, and the other is computing a shortest noncontractible cycle in surface-embedded graphs. For the shortest non-zero path problem with respect to finite abelian groups, Kobayashi and Toyooka (2017) proposed a randomized, pseudopolynomial algorithm via permanent computation. For a slightly more general class of groups, Yamaguchi (2016) showed a reduction of the problem to the weighted linear matroid parity problem. In particular, some cases are solved in strongly polynomial time via the reduction with the aid of a deterministic, polynomial-time algorithm for the weighted linear matroid parity problem developed by Iwata and Kobayashi (2017), which generalizes a well-known fact that the parity-constrained shortest path problem is solved via weighted matching. In this paper, as the first general solution independent of the group, we present a rather simple, deterministic, and strongly polynomial algorithm for the shortest non-zero path problem. The algorithm is based on Dijkstra's algorithm for the unconstrained shortest path problem and Edmonds' blossom shrinking technique in matching algorithms, and clarifies a common tractable feature behind the parity and topological constraints in the shortest path/cycle problem.
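To illustrate how group labels enter the picture, the following sketch runs Dijkstra on states (vertex, accumulated label) for the parity (Z/2) case. Note that this simple relaxation finds the shortest non-zero walk; obtaining a shortest non-zero simple path is exactly where the blossom-shrinking machinery of the paper is needed, and that is not reproduced here.

```python
# Dijkstra on (vertex, accumulated group label) states for the Z/2 case.
# Finds the shortest s-t *walk* with non-zero (odd) label.
import heapq

def shortest_nonzero_walk(adj, s, t):
    """adj[u] = list of (v, length, label) with label in {0, 1};
    undirected edges should appear in both directions."""
    dist = {(s, 0): 0.0}
    heap = [(0.0, s, 0)]
    while heap:
        d, u, g = heapq.heappop(heap)
        if d > dist[(u, g)]:
            continue
        for v, w, lab in adj.get(u, []):
            state = (v, g ^ lab)
            if d + w < dist.get(state, float("inf")):
                dist[state] = d + w
                heapq.heappush(heap, (d + w, v, g ^ lab))
    return dist.get((t, 1), float("inf"))

# Triangle s - a - t - s; only the edge (a, t) carries a non-zero label.
adj = {
    "s": [("a", 1.0, 0), ("t", 1.0, 0)],
    "a": [("s", 1.0, 0), ("t", 1.0, 1)],
    "t": [("s", 1.0, 0), ("a", 1.0, 1)],
}
print(shortest_nonzero_walk(adj, "s", "t"))  # 2.0 via s - a - t
```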

Fair Byzantine Agreements for Blockchains (1907.03437v1)

Tzu-Wei Chao, Hao Chung, Po-Chun Kuo

2019-07-08

The Byzantine generals problem is the core problem of consensus algorithms, and many protocols have been proposed recently to improve the decentralization level, the performance and the security of blockchains. There are two challenging issues when a blockchain is operating in practice. First, the outcomes of the consensus algorithm are usually tied to the incentive model, so whether each participant's value has an equal probability of being chosen becomes essential. However, issues of fairness are not captured by the traditional security definition of Byzantine agreement. Second, the blockchain should be resistant to network failures, such as cloud services shutting down or malicious attacks, while maintaining high performance most of the time. This paper has two main contributions. First, we propose a novel notion called fair validity for Byzantine agreement. Intuitively, fair validity lower-bounds the expected number of honest nodes' values that are decided if the protocol is executed many times. However, we also show that no Byzantine agreement protocol can achieve fair validity in an asynchronous network, so we focus on synchronous protocols. This leads to our second contribution: we propose a fair, responsive and partition-resilient Byzantine agreement protocol tolerating up to 1/3 corruptions. Fairness means that our protocol achieves fair validity. Responsiveness means that the termination time only depends on the actual network delay instead of any pre-determined time bound. Partition-resilience means that safety still holds even if the network is partitioned, and termination holds once the partition is resolved.


