Max-SAT Algorithms For Real World Instances
Pedro Filipe Medeiros da Silva
Dissertation for the achievement of the degree of Master in Information Systems and Computer Engineering
Chairman: Professora Doutora Maria dos Remédios Vaz Pereira Lopes Cravo
Supervisor: Professora Doutora Maria Inês Camarate de Campos Lynce de Faria
Observers: Professor Doutor Luís Jorge Brás Monteiro Guerra e Silva
October 2010
Abstract
At first glance, it may seem that there is no reason to use Max-SAT to solve
problems, due to its difficulty. In fact, there are many problems that can be
more easily translated to Max-SAT than to SAT. Max-SAT is an optimization
problem, meaning that it finds the best solution among all feasible solutions.
This contrasts with SAT, which is a decision problem, meaning that it asks
whether a satisfying assignment exists.
The Maximum Satisfiability (Max-SAT) problem is an optimization version
of the Boolean Satisfiability problem (SAT) that has gained recognition in
recent years. Still, most Max-SAT solvers lag behind the efficiency of SAT
solvers on most problems, especially real world ones.
Many real world problems are optimization problems, and so are more suitable
for translation to Max-SAT, e.g., FPGA routing[1], design debugging[2],
bioinformatics[3], scheduling[4] and probabilistic reasoning[5].
The objective of this thesis is to study the current Max-SAT solvers and the
two major approaches used. The branch-and-bound technique creates new
formulas with partial assignments until a solution is found, storing the solution
as a bound for other assignments in order to prune unnecessary branches. The
translation based approach transforms the Max-SAT problem into another one,
such as SAT.
Both approaches have auxiliary techniques to help them. Branch-and-bound
uses inference rules whenever it creates a new formula. These techniques
simplify the formula and prevent exploring some new branches. Translation based
approaches identify unsatisfiable sub-formulas to cut the number of steps needed
to translate to SAT.
While branch-and-bound solvers mostly solve randomly generated instances,
translation based solvers are better adapted to industrial instances, where
the structure of unsatisfiable sub-formulas is heterogeneous.
We propose combining these two different approaches: a pre-processor
applying inference rules, followed by a translation based solver. The objective
is to solve more industrial instances, which reflect real-world problems, with
a translation based solver.
Contents
1 Introduction 1
1.1 Background definitions . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Propositional Logic . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Boolean Satisfiability problem . . . . . . . . . . . . . . . . 3
1.1.3 Max-SAT problem . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Contributions of this dissertation . . . . . . . . . . . . . . . . . . 5
1.3 Organization of this dissertation . . . . . . . . . . . . . . . . . . 5
2 SAT Algorithms 7
2.1 Definitions and concepts . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Simplification Rules . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.1 Unit Propagation . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.2 Pure Literal Rule . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.3 Resolution Rule . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Davis-Putnam Algorithm . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Davis-Logemann-Loveland Algorithm . . . . . . . . . . . . . . . . 11
2.5 Local search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5.1 GSAT Algorithm . . . . . . . . . . . . . . . . . . . . . . . 14
2.5.2 WalkSAT Algorithm . . . . . . . . . . . . . . . . . . . . . 14
2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Max-SAT Algorithms 16
3.1 Definitions and concepts . . . . . . . . . . . . . . . . . . . . . . . 16
3.2 Branch and Bound Algorithms . . . . . . . . . . . . . . . . . . . 19
3.2.1 Underestimations . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.2 Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Translations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3.1 PBO translation . . . . . . . . . . . . . . . . . . . . . . . 27
3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Combining approaches 33
4.1 Efficient Inference . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1.1 Star rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1.2 Dominating unit clause . . . . . . . . . . . . . . . . . . . 35
4.1.3 Chain resolution . . . . . . . . . . . . . . . . . . . . . . . 35
4.1.4 Cycle resolution . . . . . . . . . . . . . . . . . . . . . . . 36
5 Testing 37
5.1 Testing Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.2 Testing Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.2.1 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . 39
5.2.2 Random instances . . . . . . . . . . . . . . . . . . . . . . 39
5.2.3 Crafted instances . . . . . . . . . . . . . . . . . . . . . . . 40
5.2.4 Industrial instances . . . . . . . . . . . . . . . . . . . . . . 41
6 Conclusions and Future Work 43
List of Figures
1.1 NP problems class . . . . . . . . . . . . . . . . . . . . . . . . . . 2
5.1 sample of instances solved - random . . . . . . . . . . . . . . . . 40
5.2 sample of instances solved - crafted . . . . . . . . . . . . . . . . . 41
5.3 sample of instances solved - industrial . . . . . . . . . . . . . . . 42
List of Tables
5.1 Pre-processed instances . . . . . . . . . . . . . . . . . . . . . . . 39
5.2 Solved random instances . . . . . . . . . . . . . . . . . . . . . . . 40
5.3 Solved crafted instances . . . . . . . . . . . . . . . . . . . . . . . 41
5.4 Solved industrial instances . . . . . . . . . . . . . . . . . . . . . . 42
1 Introduction
1.1 Background definitions
Since the discovery of the original NP-Complete (NP-C) problem[6], it has been
proven that many real-life problems are hard to solve with a Turing machine[7].
Modern computers are based on Turing machines, in the sense that any algorithm
computable on a computer is also computable on a Turing machine[8]. NP-C is
the complexity class of problems that are both NP and NP-hard. For NP problems,
a candidate solution can be verified in polynomial time on a deterministic Turing
machine. Problems that can be solved in polynomial time by a deterministic
Turing machine are known as P problems and are contained in NP. NP-hard
problems are at least as hard as the hardest problems in NP. This does not mean
that NP-hard problems are in NP; it simply states that there is a polynomial
reduction that translates any NP problem into an NP-hard problem. Until
now, no polynomial-time algorithm is known for any NP-hard problem; if there
were such an algorithm, the NP class would be equal to the P class. Figure 1.1
is a Venn diagram that shows the relationship between the P, NP, NP-C and
NP-hard classes of problems, if NP ≠ P.
Although NP-C and NP-hard problems cannot be solved efficiently on mod-
ern computers, there are many real world problems that are NP-hard and need
CHAPTER 1. INTRODUCTION 2
Figure 1.1: P, NP, NP-complete, and NP-hard classes of problems for P ≠ NP
to be solved. Today, so many problems are NP-C[9] that the best way to solve
them is often to reduce them to a well known target NP-C problem and solve
them as instances of the target problem. The original NP-C problem is the
Boolean Satisfiability problem (SAT)[6]. The development of SAT as a target
problem for solving decision problems explains why there is such a large SAT
community.
Maximum satisfiability (Max-SAT) is an optimization version of the SAT
problem that is NP-hard and APX-complete[10]; without going into much detail,
this means that Max-SAT is at least as hard as SAT (it is in fact harder), and
that it does not admit a polynomial-time approximation scheme (unless P = NP).
1.1.1 Propositional Logic
In propositional logic, the most used notation to represent a formula is the
conjunctive normal form (CNF). A CNF formula is a conjunction of clauses,
where a clause is a disjunction of literals. Formally, a CNF formula φ is defined
as follows:
φ = ⋀_{i=1}^{n} Ci,    Ci = ⋁_{j=1}^{ki} aij
where each literal aij is either a positive or negative instance of some Boolean
variable, e.g. x1 or ¬x1, ki is the number of literals in the clause Ci, and n is the
number of clauses in the formula φ.
Example 1.1.1 The CNF formula φ = (x1) ∧ (x2 ∨ x3) ∧ (x1 ∨ x3 ∨ x2) can
also be represented, for simplification, as φ = {x1, x2 ∨ x3, x1 ∨ x3 ∨ x2}. φ has
three clauses: the first clause (x1) is a unit clause, the second clause (x2 ∨ x3)
is a binary clause and the third clause (x1 ∨ x3 ∨ x2) is a ternary clause. Unit
and binary clauses are very important and will be considered later on.
In some problems, such as those that follow, it is useful to associate a weight
with each clause; the resulting formulas are in Weighted Conjunctive Normal
Form (WCNF). A WCNF formula is a conjunction of weighted clauses, where a
weighted clause is a pair of a disjunction of literals and its weight.
Example 1.1.2 The WCNF formula φ = {(x1, 2), (x2∨x3, 1), (x1∨x3∨x2, 3)}
has three clauses with weights 2, 1 and 3 for the first, second and third clause,
respectively.
1.1.2 Boolean Satisfiability problem
The Boolean Satisfiability problem (SAT) is to determine whether, given
a CNF formula φ, there exists some assignment to its variables that makes the
formula evaluate to TRUE, in which case the formula is said to be satisfiable;
otherwise it is unsatisfiable. A formula φ evaluates to TRUE if all its clauses
evaluate to TRUE, and a clause Ci evaluates to TRUE if at least one of its
literals evaluates to TRUE. The problem of deciding whether a given CNF
instance is satisfiable is the canonical NP-Complete problem.
Example 1.1.3 Given the CNF formula φ = {x1 ∨ x2 ∨ x3, ¬x1 ∨ ¬x2 ∨ x4}, a
solution to the SAT problem is to assign x1 the value TRUE and x2 the value
FALSE (the remaining variables could have either value).
1.1.3 Max-SAT problem
The Maximum Satisfiability problem (Max-SAT) is an optimization version
of the SAT problem, in which one must find an assignment to the formula's
variables that maximizes the number of satisfied clauses (or, equivalently,
minimizes the number of unsatisfied clauses).
Example 1.1.4 With the CNF formula φ = {x1, ¬x1∨x2, ¬x1∨¬x2, ¬x1∨x3, ¬x1∨¬x3},
the maximum number of satisfied clauses is four or, alternatively, the minimum
number of unsatisfied clauses is one. The solution is to assign x1 to FALSE.
Note that in Max-SAT we define CNF formulas as multi-sets of clauses instead
of sets of clauses as in SAT, given that in Max-SAT we cannot collapse dupli-
cated clauses.
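As a sketch of the definition (not the thesis's algorithm), Max-SAT can be solved for tiny formulas by exhaustively enumerating assignments and counting unsatisfied clauses. Literals are DIMACS-style integers, and the sample formula renders Example 1.1.4 with clause polarities assumed to match its stated optimum of one unsatisfied clause.

```python
from itertools import product

def min_unsat(cnf):
    """Minimum number of unsatisfied clauses over all complete assignments
    (exhaustive search: exponential, for illustration only)."""
    variables = sorted({abs(lit) for clause in cnf for lit in clause})
    best = len(cnf)
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        unsat = sum(1 for clause in cnf
                    if not any(assignment[abs(l)] == (l > 0) for l in clause))
        best = min(best, unsat)
    return best

# Example 1.1.4 (assumed polarities): x1 = FALSE falsifies only the first clause.
phi = [[1], [-1, 2], [-1, -2], [-1, 3], [-1, -3]]
print(min_unsat(phi))  # 1
```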
Besides Max-SAT, there are variants that enable the expression of more real
world problems. We take into account three of these extensions:
• In the weighted Max-SAT problem we use WCNF. The problem is to find
an assignment that maximizes the sum of the weights of the satisfied clauses.
If all the weights are equal to one, the instance reduces to a plain Max-SAT
formulation.
• In the partial Max-SAT formulation, each clause in the formula is either
soft or hard. The hard clauses must all be satisfied, while the number of
satisfied soft clauses must be maximized. Special cases reduce to a SAT
problem (if all the clauses are hard) or to a Max-SAT problem (if all the
clauses are soft).
• The weighted partial Max-SAT problem is the combination of the partial
Max-SAT problem, where clauses are hard or soft, and the weighted Max-SAT
problem, where soft clauses have a weight. One must find an assignment
that satisfies the hard clauses and maximizes the sum of the weights of the
satisfied soft clauses. The special cases are analogous to the previous variants.
1.2 Contributions of this dissertation
We propose to combine the two major approaches to solving Max-SAT in
order to solve more instances of real world problems, by using the inference
rules of branch and bound algorithms to pre-process instances for a translation
based Max-SAT solver. We expect that the translation based solver will be
able to solve more real world instances within the same time limit, given that
the instances are simplified by the inference rules.
The inference rules in the pre-processor were all implemented from scratch,
i.e., no available implementation was used. In order to use more efficient infer-
ence rules, we adapted some well known rules from the literature. These simpli-
fications are detailed in section 4.1 and include hyper-resolution inference rules.
Hyper-resolution rules are complex inference rules composed of several simpler
inference rules. The rules that we adapted, chain and cycle resolution[11], were
originally described for weighted Max-SAT. Because we will be using a partial
Max-SAT solver, msuncore, as our translation based solver, we simplified and
adapted these rules for partial Max-SAT.
1.3 Organization of this dissertation
Section 2 recapitulates the SAT problem and the techniques used to solve it.
Section 3 has a brief introduction to the different types of Max-SAT solvers. In
section 3.2 we describe an approach used by most Max-SAT solvers, the branch
and bound scheme. This scheme can be improved with better lower bounding
techniques described in section 3.2.1 and section 3.2.2. Besides this scheme,
there is another technique, described in Section 3.3, that translates Max-SAT
instances into other problems. Section 4 introduces the concept of combining
the branch and bound approach with translation based techniques to solve
specific Max-SAT problems. Section 5 evaluates the number of instances solved
with our proposed technique. Section 6 summarizes the conclusions drawn from
the tests and the future work that can be done to better tackle real world
Max-SAT problems.
While sections 2 and 3 describe the current state of both SAT and Max-SAT,
in section 4 we describe our contribution: adapting efficient inference rules
from branch-and-bound solvers and using them as a pre-processor for translation
based solvers. As a result, we expect to solve more instances of real world
problems. Both the testing done and the setup required are described in section 5,
and the conclusions are presented in section 6.
2 SAT Algorithms
In this chapter we present the SAT problem and its most important algo-
rithms. We start by introducing, in section 2.1, some definitions and concepts
of SAT that are used throughout this thesis. After introducing the problem, in
section 2.2 we review some of the simplification rules used in SAT algorithms
to simplify a SAT instance. In section 2.3 we cover the original Davis-Putnam[12]
(DP) algorithm, and in section 2.4 the Davis-Logemann-Loveland[13] (DLL)
algorithm and its improvements. These backtracking algorithms remain the
basis of most state-of-the-art SAT solvers. They are able to identify unsatisfiable
instances, unlike local search algorithms. In section 2.5, we introduce local
search algorithms for SAT and the two most prominent such algorithms,
GSAT[14] and WalkSAT[15].
2.1 Definitions and concepts
As stated in section 1.1.2, the problem of Boolean Satisfiability (SAT) is to
determine whether, given a CNF formula φ, there exists some assignment to
its variables that makes the formula evaluate to TRUE, in which case it is said
that the formula is satisfiable; otherwise it is unsatisfiable.
A conjunctive normal form (CNF) formula is a conjunction of clauses, also
CHAPTER 2. SAT ALGORITHMS 8
represented as a set of clauses. Each clause is a disjunction of literals, where
a literal is a variable x or its negation ¬x. The complement of a literal l, denoted
¬l, is the literal on the same variable with the opposite polarity: if l is x, then
¬l is ¬x. The size of a clause is the number of literals in the clause.
A literal can be assigned a truth value: a positive literal x is satisfied when
its variable is assigned TRUE, and a negated literal, e.g. ¬x, is satisfied when
its variable is assigned FALSE. An assignment satisfies a clause if at least
one literal in the clause is satisfied. A CNF formula is satisfiable if there is
an assignment that satisfies all its clauses; otherwise, it is unsatisfiable. An
assignment to a CNF formula is complete if all the variables have a truth
value assigned; otherwise it is partial. A complete assignment that satisfies a
CNF formula is called a model. The SAT problem is to determine if there is a
model for a given CNF formula. Two SAT instances are equivalent if they
share the same set of models.
Unassigned literals in a clause are called free literals. A clause with one free
literal is called unit, with two literals binary, and so on. A clause with no free
literals, an empty clause, cannot be satisfied and is represented by □.
Applying an assignment A to a formula φ is represented as A(φ). The
result of applying an assignment to a formula is the replacement of the literals
in the formula by the truth values of the assignment which, if the assignment
is complete, yields a truth value: TRUE if the formula is satisfied, FALSE
otherwise.
Example 2.1.1 Given the CNF formula φ = c1 ∧ c2 ∧ c3, with clauses c1, c2
and c3:
c1 = (x1)
c2 = (x2 ∨ ¬x3)
c3 = (¬x1 ∨ ¬x3)
Using the assignment A1 : {x1 = TRUE} on φ, written A1(φ), produces
a partial assignment where c1 is satisfied, c2 is unmodified and c3 is left with
only one free literal, becoming a unit clause. If we were to use the assignment
A2 : {x1 = TRUE, x3 = TRUE}, clause c3 would become empty, and so
unsatisfiable, as would the formula φ. The assignment A3 : {x1 = TRUE, x2 =
TRUE, x3 = FALSE} would satisfy all clauses in φ, i.e., A3(φ) = TRUE.
2.2 Simplification Rules
2.2.1 Unit Propagation
Unit Propagation (UP) is applied to unit clauses. Because the SAT problem
requires that at least one literal is satisfied per clause, the only option in a unit
clause is to satisfy the only free literal it contains. The assignment of the literal
in the unit clause satisfies every clause containing that literal and removes all
occurrences of its complement from the formula.
Example 2.2.1 Given the formula φ from example 2.1.1, UP can be applied
to clause c1 = (x1), resulting in the assignment A1 : {x1 = TRUE}. Applying
A1(φ) produces φ′ = {x2 ∨ ¬x3, ¬x3}, with c1 satisfied and c3 partially assigned
with one free literal. With φ′, we can also apply UP to the second clause, ¬x3,
with assignment A2 : {x3 = FALSE}. When applied to φ′, both remaining
clauses are satisfied and so A2(φ′) = TRUE. Therefore, we can solve the SAT
problem on φ using the assignment A : {x1 = TRUE, x2 = TRUE, x3 = FALSE},
where x2 could have either truth value.
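Unit propagation can be sketched in executable form as follows (illustrative only, with DIMACS-style integer literals; the clause polarities of Example 2.1.1 are an assumption taken from the results stated in the example):

```python
def unit_propagate(cnf):
    """Repeatedly satisfy unit clauses. Clauses containing the satisfied
    literal are removed; the complementary literal is deleted from the
    remaining clauses. Returns the simplified formula and the assignment."""
    cnf = [list(clause) for clause in cnf]
    assignment = {}
    while True:
        unit = next((c for c in cnf if len(c) == 1), None)
        if unit is None:
            return cnf, assignment
        lit = unit[0]
        assignment[abs(lit)] = lit > 0
        cnf = [[l for l in c if l != -lit] for c in cnf if lit not in c]

# phi of Example 2.1.1: (x1), (x2 or not x3), (not x1 or not x3)
phi = [[1], [2, -3], [-1, -3]]
simplified, assignment = unit_propagate(phi)
print(simplified, assignment)  # [] {1: True, 3: False}
```

As in the example, x2 is left unassigned and may take either value.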
2.2.2 Pure Literal Rule
If a literal appears with only one polarity in a formula, i.e., its complement
does not appear in the formula, the literal is said to be pure. Such a literal
can be satisfied without affecting the satisfiability of the formula.
Example 2.2.2 Given the formula φ in example 2.1.1, variable x3 only appears
as the literal ¬x3, so the clauses where it appears can be satisfied with the
assignment A1 : {x3 = FALSE}. A1(φ) satisfies clauses c2 and c3. The
remaining clause also has a pure literal, x1, and so the resulting assignment
A2 : {x1 = TRUE, x3 = FALSE} satisfies the formula φ.
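A single pass of the pure literal rule might be sketched like this (illustrative; same DIMACS-style encoding and assumed clause polarities as before). As in Example 2.2.2, the rule can then be applied again to whatever clauses remain:

```python
def pure_literal_rule(cnf):
    """Satisfy every pure literal (one whose complement never occurs) and
    drop the clauses it satisfies; returns the rest and the assignment."""
    literals = {lit for clause in cnf for lit in clause}
    pure = {lit for lit in literals if -lit not in literals}
    assignment = {abs(lit): lit > 0 for lit in pure}
    remaining = [c for c in cnf if not any(lit in pure for lit in c)]
    return remaining, assignment

# phi of Example 2.1.1: x1 occurs in both polarities, but 2 and -3 are pure
phi = [[1], [2, -3], [-1, -3]]
remaining, assignment = pure_literal_rule(phi)
print(remaining, assignment)  # [[1]] {2: True, 3: False}
```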
2.2.3 Resolution Rule
The resolution rule is an inference rule that provides a complete proof system
by refutation. It takes a pair of clauses that share a variable with opposite
signs, i.e. a literal and its complement, and derives a new clause called the
resolvent. The resolvent is the disjunction of all the remaining literals of both
original clauses. If no literals remain, the empty clause is derived. Formally, if
C ∪ {x} and D ∪ {¬x} are clauses of a SAT instance, then their resolvent is
C ∪ D.
Example 2.2.3 Given the formula φ in example 2.1.1, we can apply the res-
olution rule to clauses c1 and c3, deriving the resolvent clause (¬x3). Note that
this new clause can be used for UP afterwards.
If we apply the resolution rule iteratively to a formula, we obtain a proof
when either an empty clause is generated or the resolution rule can no longer
be applied, i.e., all the literals present in the formula are pure. When one of
these scenarios occurs, the formula is unsatisfiable or satisfiable, respectively.
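The resolution step itself is short over the integer encoding used in the earlier sketches (illustrative; an empty result stands for the empty clause):

```python
def resolve(c, d, var):
    """Resolvent of clauses c and d on variable var; c must contain var and
    d its complement. The result is the union of the remaining literals."""
    assert var in c and -var in d
    return sorted((set(c) - {var}) | (set(d) - {-var}))

# Example 2.2.3: resolving (x1) with (not x1 or not x3) derives (not x3)
print(resolve([1], [-1, -3], 1))  # [-3]
```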
2.3 Davis-Putnam Algorithm
The Davis-Putnam (DP) algorithm, developed by Davis and Putnam[12], was
the first method to use resolution to solve the SAT problem. It consists of
iteratively applying the resolution rule to pairs of clauses containing a variable
x until neither literal of x appears in the formula. For each variable removed,
a quadratic number of clauses can be generated. The order in which the
variable is chosen is relevant to the size and number of clauses generated
until the procedure terminates.
The DP algorithm is augmented with unit propagation and the pure literal
rule in each iteration to simplify, and possibly solve, the formula. Algorithm 1
shows the pseudo-code for DP.
The auxiliary functions unitPropagation and pureLiteralRule take a formula φ
Algorithm 1: Davis-Putnam algorithm for SAT
Input: DP(φ): a CNF formula φ
Output: TRUE if φ is satisfiable, FALSE if unsatisfiable
φ ← unitPropagation(φ)
φ ← pureLiteralRule(φ)
if φ = ∅ then
    return TRUE
if ∃Ci ∈ φ : Ci = ∅ then
    return FALSE
l ← selectLiteral(φ)
return DP(variableElimination(φ, l))
and apply, when possible, the respective simplification rules (unit propagation
and the pure literal rule), returning the simplified formula. selectLiteral
chooses the literal occurring in the minimum number of clauses.
variableElimination takes a formula φ and a literal l and applies the resolution
rule to all clauses containing l, until no clause with literal l remains in φ.
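Algorithm 1 can be rendered as the following executable sketch (illustrative and deliberately inefficient, not the thesis's implementation; tautological resolvents are dropped so that variable elimination terminates):

```python
def dp(cnf):
    """Davis-Putnam sketch on DIMACS-style clauses; True iff satisfiable."""
    cnf = [frozenset(c) for c in cnf]
    while True:
        if not cnf:
            return True                 # no clauses left: satisfiable
        if frozenset() in cnf:
            return False                # empty clause derived: unsatisfiable
        unit = next((c for c in cnf if len(c) == 1), None)
        if unit is not None:            # unit propagation
            (lit,) = unit
            cnf = [c - {-lit} for c in cnf if lit not in c]
            continue
        literals = {l for c in cnf for l in c}
        pure = next((l for l in literals if -l not in literals), None)
        if pure is not None:            # pure literal rule
            cnf = [c for c in cnf if pure not in c]
            continue
        # eliminate one variable by resolving every opposing pair of clauses
        var = abs(next(iter(literals)))
        pos = [c for c in cnf if var in c]
        neg = [c for c in cnf if -var in c]
        resolvents = [(p - {var}) | (n - {-var}) for p in pos for n in neg]
        cnf = ([c for c in cnf if var not in c and -var not in c]
               + [r for r in resolvents if not any(-l in r for l in r)])

print(dp([[1], [-1]]))                 # False
print(dp([[1, 2], [-1, 2], [-2, 3]]))  # True
```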
2.4 Davis-Logemann-Loveland Algorithm
Most current SAT solvers are augmentations of the original backtracking
algorithm of Davis, Logemann and Loveland[13] (DLL). The backtracking
procedure consists of subdividing the problem into two subproblems. The search
space needed to enumerate all possible assignments is 2^n, where n is the number
of variables in the formula.
DLL constructs a binary search tree of assignments. As it explores in a
depth-first manner, the assignment starts empty and, at each branch, a literal
is chosen and given a truth value on one branch and its complement on the
other branch. If an assignment satisfies the formula, then the original formula
is also satisfiable; otherwise a new literal is chosen and this step is repeated.
Unsatisfiability is only proved when all possible paths have been explored.
There are also simplification steps, as in the DP algorithm: DLL uses unit
propagation and the pure literal rule at each split.
Several improvements to the DLL backtrack search algorithm have also been
introduced. These features have been useful for solving SAT instances from real-
Algorithm 2: Davis-Logemann-Loveland algorithm for SAT
Input: DLL(φ): a CNF formula φ
Output: TRUE if φ is satisfiable, FALSE if unsatisfiable
φ ← unitPropagation(φ)
φ ← pureLiteralRule(φ)
if φ = ∅ then
    return TRUE
if ∃Ci ∈ φ : Ci = ∅ then
    return FALSE
l ← selectLiteral(φ)
return DLL(φl) OR DLL(φ¬l)
world problems.
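A minimal executable sketch of the DLL scheme follows (illustrative; it returns a model rather than just TRUE/FALSE, and omits the pure literal rule and all modern improvements):

```python
def dll(cnf, assignment=None):
    """DLL backtracking sketch: returns a satisfying assignment as a dict
    mapping variable -> bool, or None if the formula is unsatisfiable."""
    assignment = dict(assignment or {})
    cnf = [frozenset(c) for c in cnf]
    while True:                         # unit propagation
        unit = next((c for c in cnf if len(c) == 1), None)
        if unit is None:
            break
        (lit,) = unit
        assignment[abs(lit)] = lit > 0
        cnf = [c - {-lit} for c in cnf if lit not in c]
    if not cnf:
        return assignment               # every clause satisfied
    if frozenset() in cnf:
        return None                     # conflict: backtrack
    lit = min(cnf[0], key=abs)          # branch on a literal, try both values
    for choice in (lit, -lit):
        branch = [c - {-choice} for c in cnf if choice not in c]
        result = dll(branch, {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None

print(dll([[1], [-1]]))  # None
```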
2.5 Local search
Local search algorithms can solve extremely large satisfiable instances of
SAT, and have also been shown to be very efficient on randomly generated
instances. The main drawback of these techniques is their lack of completeness:
local search algorithms are typically incomplete, that is, there is no guarantee
that an existing solution will be found, and if no solution exists that fact can
never be determined with certainty.
Local search is typically applied to satisfiable problem instances, that is,
instances with at least one solution. Local search aims to find an assignment
of truth values to the variables of the problem such that the formula evaluates
to TRUE.
Local search algorithms typically start with some randomly generated com-
plete assignment and try to find a satisfying assignment by iteratively changing
the assignment of one propositional variable. Each change of the assignment of
a variable is called a variable flip, and variables are selected heuristically. Such
changes are repeated until either a satisfying assignment is found or a pre-set
maximum number of changes is reached. This process is repeated as needed, up
to a pre-set number of times. Usually, local search algorithms do not explore
the entire search space, and a given assignment may be considered more than
once.
Algorithm 3 provides pseudo-code for a basic local search algorithm for
SAT. It starts by generating a random assignment over all variables. It then
tests whether the assignment satisfies the formula; if it does not, a variable
is chosen, randomly or with a heuristic, by the function selectVariableAssign-
ment. The chosen variable is then flipped in the assignment, i.e., its truth
value is inverted.
Algorithm 3: Basic local search algorithm for SAT
Input: LocalSearch(φ): a CNF formula φ
Output: An assignment A of φ if a solution is found, else 'no solution found'
for 1 to maxTries do
    A ← initialAssignment(φ)
    for 1 to maxSteps do
        if satisfies(A, φ) then
            return A
        else
            x, V ← selectVariableAssignment(φ, A)
            A ← A \ {x = V}
            A ← A ∪ {x = ¬V}
return 'no solution found'
The main difference between local search algorithms is the implementation
of the function selectVariableAssignment. Furthermore, these search methods
can visit the same location in the search space more than once, and they can
get stuck in a small number of locations from which they cannot escape. These
locations, also called local minima, require special escape strategies, such as
random restarts, to continue. A random restart randomizes the assignment if
no solution was found within a given number of steps.
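Algorithm 3 might be realised as the following sketch (illustrative; the variable choice here is plain random selection from a random unsatisfied clause, one of many possible selectVariableAssignment strategies, and the seed is fixed only to make runs repeatable):

```python
import random

def local_search(cnf, max_tries=10, max_steps=100, seed=0):
    """Incomplete local search: returns a model or None (no conclusion)."""
    rng = random.Random(seed)
    variables = sorted({abs(lit) for clause in cnf for lit in clause})
    for _ in range(max_tries):
        # random restart: a fresh complete random assignment
        assignment = {v: rng.choice([False, True]) for v in variables}
        for _ in range(max_steps):
            unsat = [c for c in cnf
                     if not any(assignment[abs(l)] == (l > 0) for l in c)]
            if not unsat:
                return assignment       # a satisfying assignment was found
            # flip one variable from a randomly chosen unsatisfied clause
            var = abs(rng.choice(rng.choice(unsat)))
            assignment[var] = not assignment[var]
    return None                         # incomplete: nothing can be concluded

phi = [[1, 2], [3, -1], [4]]
model = local_search(phi)
```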
Example 2.5.1 Given a SAT instance φ = {(x1 ∨ x2), (x3 ∨ ¬x1), (x4)}, local
search begins by choosing the random assignment A = {x1 = TRUE, x2 =
TRUE, x3 = FALSE, x4 = TRUE}. Because A does not satisfy φ, a variable
is chosen to be flipped, in this case x1. The new assignment A = {x1 =
FALSE, x2 = TRUE, x3 = FALSE, x4 = TRUE} now satisfies φ and A is
returned.
As can be seen in example 2.5.1, this strategy is ineffective due to the random-
ness of the variables chosen to be flipped: there is no guarantee that a flip brings
the assignment closer to satisfying the formula. The two most common local
search algorithms that try to overcome this difficulty are GSAT[14] and
WalkSAT[15], presented in the following subsections.
2.5.1 GSAT Algorithm
The GSAT[14] algorithm was one of the first local search algorithms. Its
strategy for choosing the variable to flip is to score each variable in the current
assignment: it tries to maximize the number of satisfied clauses by testing which
variable, if flipped, would increase the number of satisfied clauses the most.
This technique is greedy, and there is no guarantee that some variable flip will
increase the number of satisfied clauses; in fact, it can happen that no variable
flip changes the number of satisfied clauses at all. Such a flip is called a sideways
move, because it does not bring the assignment closer to a solution.
As noted previously, local minima are the main problem of local search algo-
rithms. Although sideways moves can help escape some local minima, they do
not help if the algorithm is on a plateau, a set of neighboring states each with
an equal number of unsatisfied clauses.
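GSAT's greedy scoring step can be sketched as follows (illustrative; ties are broken by taking the first best variable, and a returned variable whose flip leaves the score unchanged would correspond to a sideways move):

```python
def gsat_choose(cnf, assignment):
    """Return the variable whose flip maximizes the number of satisfied
    clauses, under DIMACS-style integer literals."""
    def satisfied(a):
        return sum(1 for c in cnf if any(a[abs(l)] == (l > 0) for l in c))
    best_var, best_score = None, -1
    for var in assignment:
        flipped = {**assignment, var: not assignment[var]}
        score = satisfied(flipped)
        if score > best_score:
            best_var, best_score = var, score
    return best_var

phi = [[1, 2], [-1, 2], [-2]]
print(gsat_choose(phi, {1: True, 2: False}))  # 1
```

In this tiny instance both flips score equally, so the choice of x1 is itself a sideways move.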
2.5.2 WalkSAT Algorithm
The WalkSAT algorithm was introduced in [15]. The process it uses to
pick the variable to flip differs from GSAT's: it first chooses a random
unsatisfied clause and then, using a heuristic function, picks the variable in
that clause to be flipped. In its simplest form, the heuristic can be stated as
follows: if any variable of the clause can be flipped without falsifying any other
clause, pick one such variable at random; otherwise, choose the variable whose
flip falsifies the fewest currently satisfied clauses.
Although WalkSAT seems very close to GSAT, GSAT chooses the variable
to flip from among all the variables, while WalkSAT chooses from the smaller
set of variables in an unsatisfied clause. This difference in approach is reflected
in the general superiority of WalkSAT over GSAT.
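The WalkSAT choice described above might be sketched like this (illustrative; breaks(v) counts the currently satisfied clauses that flipping v would falsify, the "break count" used by common WalkSAT variants):

```python
import random

def walksat_choose(cnf, assignment, rng=None):
    """Pick a variable from a random unsatisfied clause: a zero-break
    variable at random if one exists, else the minimum-break variable."""
    rng = rng or random.Random(0)
    def breaks(var):
        flipped = {**assignment, var: not assignment[var]}
        return sum(1 for c in cnf
                   if any(assignment[abs(l)] == (l > 0) for l in c)
                   and not any(flipped[abs(l)] == (l > 0) for l in c))
    unsat = [c for c in cnf
             if not any(assignment[abs(l)] == (l > 0) for l in c)]
    clause = rng.choice(unsat)
    zero_break = [abs(l) for l in clause if breaks(abs(l)) == 0]
    if zero_break:
        return rng.choice(zero_break)
    return min((abs(l) for l in clause), key=breaks)

# the instance of Example 2.5.1: only (x3 or not x1) is unsatisfied here
phi = [[1, 2], [3, -1], [4]]
print(walksat_choose(phi, {1: True, 2: True, 3: False, 4: True}))
```

Both x1 and x3 have break count zero here, so either may be returned.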
2.6 Summary
In this chapter, we have given an overview of the satisfiability problem.
First, we introduced the SAT problem and its notation. Second, we covered
the most used complete algorithms for SAT, DP and DLL, as well as some
recently developed techniques in SAT solving that improve these algorithms.
Third and finally, we presented another approach for solving SAT, local search,
along with its most prominent algorithms.
3 Max-SAT Algorithms
In this chapter we present the problem of Max-SAT and its most im-
portant algorithms.
The most popular technique for exact Max-SAT solvers is branch and
bound, which will be further explained in section 3.2. We will also explain
another approach that has been found to be effective mostly on real-world
problems, which consists of efficiently translating Max-SAT instances into
instances of other problems, such as SAT, where they can be solved with
state-of-the-art solvers designed for the SAT problem.
3.1 Definitions and concepts
As introduced in section 1.1.3, the problem of Max-SAT is an optimization
version of the SAT problem. We previously introduced it as the maximization
of the number of satisfied clauses, but it can also be seen as the minimization
of the number of unsatisfied clauses. Although the two views are equivalent,
the minimization view will be used throughout this dissertation.
Max-SAT instances also use the CNF formulation, as in the SAT problem
(see section 2.1). One particular difference between the CNF formulation
in SAT and in Max-SAT is that, in Max-SAT, a formula is a multi-set of clauses,
CHAPTER 3. MAX-SAT ALGORITHMS 17
meaning it can contain duplicated clauses, as opposed to SAT.
Example 3.1.1 Given the formula φ = {x2, x2, ¬x2, x1, ¬x1 ∨ ¬x2}, which is
an instance of Max-SAT, the first or second clause would be redundant in SAT,
as the resulting formula would be equivalently unsatisfiable; but in Max-SAT φ
has a minimum of two unsatisfied clauses, and so neither clause can be removed.
The notion of equivalence also differs from SAT: in Max-SAT, two instances
are said to be equivalent if they have the same number of unsatisfied clauses
for every complete assignment.
There are several extensions of Max-SAT that are better suited to expressing
over-constrained problems. We will review three variants: weighted Max-SAT,
partial Max-SAT and weighted partial Max-SAT.
Weighted Max-SAT is a Max-SAT variant in which each clause is a weighted
clause. The corresponding CNF formulation, WCNF, is a conjunction of weighted
clauses. A weighted clause is a pair (C, w), where C is a disjunction of literals,
as in CNF, and w is the weight associated with the clause, represented as
a positive number. The problem of weighted Max-SAT is that of finding a
complete assignment that maximizes the sum of the weights of satisfied clauses
or, equivalently, that minimizes the sum of the weights of unsatisfied clauses.
Example 3.1.2 Given the weighted Max-SAT instance φ = {(x1, 1), (x2, 1), (¬x1 ∨
¬x2, 3)}, an assignment that minimizes the sum of the weights of the unsatisfied
clauses is A : {x1 = FALSE, x2 = TRUE}. A(φ) results in one, with only the
first clause falsified.
The original Max-SAT problem can be seen as the special case of the weighted
Max-SAT problem where all weighted clauses have weight one.
In partial Max-SAT, each clause of a CNF instance is either soft or hard,
represented here with parentheses or square brackets, respectively. The partial
Max-SAT problem is that of finding a complete assignment that maximizes the
number of satisfied soft clauses while satisfying all hard clauses. This
problem can be generalized to a weighted Max-SAT problem where soft clauses
have weight one and hard clauses have a weight larger than the sum of the
weights of the soft clauses.
Example 3.1.3 Given the partial Max-SAT instance φ = {[x1], (x2), [¬x1 ∨ ¬x2]},
the assignment that satisfies the hard clauses (the first and third clauses)
and minimizes the number of unsatisfied soft clauses (the second clause) is
A : {x1 = TRUE, x2 = FALSE}.
Weighted partial Max-SAT is the combination of partial Max-SAT, where
clauses are hard or soft, and weighted Max-SAT, where soft clauses have a
weight value. Again, hard clauses can be represented with a weight greater
than the sum of the weights of the soft clauses. It can also be seen as a
generalization of both partial Max-SAT and weighted Max-SAT, in which soft
clauses have weight one or there are no hard clauses, respectively.
Example 3.1.4 Given the weighted partial Max-SAT instance φ = {(x1, 1), (x2, 3),
[¬x1 ∨ ¬x2]}, an assignment that minimizes the sum of the weights of the
unsatisfied soft clauses, while satisfying the hard clause, is A : {x1 =
FALSE, x2 = TRUE}.
For some future examples it will be useful to have an alternative definition
of weighted partial Max-SAT that can also be adapted to partial Max-SAT and
weighted Max-SAT. Instead of having soft and hard clauses, every clause has an
associated weight, and hard clauses have the weight ⊤. ⊤ is always a positive
natural number; if a tight upper bound is not known, ⊤ is set to a number
higher than the sum of the weights of all clauses. This is the format used in
real instances.
With this format in mind, a few definitions are useful for expressing
Max-SAT rules. Let u and w be weights, where u ≥ w.
The sum of weights is defined as

u ⊕ w = min{u + w, ⊤}

and the subtraction of weights as

u ⊖ w = u − w if u ≠ ⊤, and u ⊖ w = ⊤ if u = ⊤.
Example 3.1.5 Given the weighted partial Max-SAT instance of the previous
example, in this new format it would be φ = {(x1, 1), (x2, 3), (¬x1 ∨ ¬x2, ⊤)},
where ⊤ = 5.
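The ⊕ and ⊖ operations are easy to mirror in code. The Python sketch below is
purely illustrative, with TOP standing for ⊤ and set to 5 as in the instance
of Example 3.1.5:

```python
TOP = 5  # stands for the hard weight, one above the sum of soft weights

def wsum(u, w):
    """u (+) w = min(u + w, TOP): the sum saturates at TOP."""
    return min(u + w, TOP)

def wsub(u, w):
    """u (-) w: ordinary subtraction, except that TOP is absorbing."""
    assert u >= w, "subtraction of weights assumes u >= w"
    return TOP if u == TOP else u - w

print(wsum(3, 4))    # 5: saturates at TOP
print(wsub(TOP, 3))  # 5: TOP stays TOP
print(wsub(3, 1))    # 2
```

The absorbing behaviour of ⊤ is what lets hard clauses keep weight ⊤ after any
number of inference steps.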
Any empty clause with weight below ⊤ can be seen as an explicit lower
bound on the optimal solution. If an empty clause has weight ⊤ or more, it
can be said that the formula is unsatisfiable.
One of the problems that has deterred the development of efficient Max-SAT
algorithms has been the lack of modern techniques such as the ones SAT
solvers have. Most SAT solvers rely on the DPLL algorithm presented in section
2.4 to efficiently simplify the formula, using rules such as unit propagation.
Not only does it reduce the formula's complexity, it can also be used to test
the unsatisfiability of the formula with respect to a partial assignment. In
the Max-SAT problem it is not enough to preserve satisfiability as in the SAT
problem: one has to prove that a new rule is sound, i.e., the inferred formula
must have the same number of satisfied clauses for every possible variable
assignment. This condition is stricter than in SAT.
Example 3.1.6 Consider the Max-SAT instance φ = {x1, ¬x1 ∨ x2, ¬x1 ∨ ¬x2,
¬x1 ∨ x3, ¬x1 ∨ ¬x3}, and suppose we were to apply unit propagation as in SAT
with the assignment A : {x1 = TRUE}. This partial assignment originates two
empty clauses; on the contrary, if we used the assignment A′ : {x1 = FALSE},
only one empty clause would appear.
3.2 Branch and Bound Algorithms
One of the main techniques used by exact Max-SAT solvers is branch and
bound (BnB), e.g. Lazy[16, 17, 18], MaxSatz[19, 20, 21], Clone[22], LB-Sat[23,
24], MinMaxSat[25, 26], PMS[27], SR(w)[28] and Max-DPLL[11]. Branch and
bound is an algorithmic paradigm for solving combinatorial optimization
problems, first introduced in [29] for linear programming. Throughout
this section, we will treat the Max-SAT problem as the minimization of the
number of unsatisfied clauses, for the sake of simplicity.
BnB for Max-SAT works as follows: BnB explores the search space as a
binary tree, where the root is the Max-SAT instance φ, the left child of a
node is φ with some variable x set to FALSE (or TRUE) and the right child is
φ with x set to TRUE (or FALSE, respectively). BnB explores the search space
in a depth-first way and at each node compares the best solution found so far,
the upper
bound (UB), with the number of unsatisfied clauses plus an underestimation
of the number of unsatisfied clauses, the lower bound (LB). If LB ≥ UB, the
current assignment is worse than the current best, so the branch can be
pruned and the algorithm backtracks in the search tree. If otherwise
LB < UB, BnB recursively applies the algorithm, assigning a Boolean value to
a variable in the left branch and its negation in the right branch, and so on.
The algorithm stops when the whole valid search tree has been explored,
returning the minimum number of unsatisfied clauses of the original input
formula, which is stored in the variable UB.
Algorithm 4: Basic branch and bound algorithm for Max-SAT
Input: max-sat(φ, UB): a CNF formula φ, an upper bound UB
Output: The minimal number of unsatisfied clauses of φ
1  φ ← simplifyFormula(φ);
2  if φ = ∅ or ∀Ci ∈ φ : Ci = ∅ then
3      return #emptyClauses(φ);
4  LB ← #emptyClauses(φ) + underestimation(φ, UB);
5  if LB ≥ UB then
6      return UB;
7  x ← selectLiteral(φ);
8  UB ← min(UB, max-sat(φx, UB));
9  return min(UB, max-sat(φ¬x, UB));
Algorithm 4 contains the pseudo-code of a basic BnB solver for Max-SAT, as
in [21]. The following notation is used:
• simplifyFormula(φ) is a function that applies inference rules to simplify
φ and returns the simplified instance.
• #emptyClauses(φ) is a function that returns the number of empty clauses
in φ.
• LB is a lower bound of the number of unsatisfied clauses in φ. We can
assume that it is 0 at start.
• underestimation(φ, UB) is a function that returns an underestimation of
the number of clauses that will become empty in φ.
• UB is an upper bound of the number of unsatisfied clauses in φ. We can
assume that its initial value is the total number of clauses in the formula.
• selectLiteral(φ) is a function that returns a literal from φ using a heuristic
to select the variable.
• φx is the formula φ with the literal x set to TRUE.
• φ¬x is the formula φ with the literal x set to FALSE.
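The skeleton of Algorithm 4 can be sketched in Python. This is an illustrative
sketch only, not the thesis's pseudo-code itself: clauses are lists of signed
integer literals, the underestimation is the trivial zero, and literal
selection is naive:

```python
def assign(phi, lit):
    """Set literal `lit` to TRUE: drop satisfied clauses and remove the
    falsified complementary literal from the remaining ones."""
    return [[l for l in c if l != -lit] for c in phi if lit not in c]

def max_sat(phi, ub):
    """Basic BnB for Max-SAT in the spirit of Algorithm 4.

    `phi` is a list of clauses, each a list of integer literals
    (2 means x2, -2 means its negation); returns the minimum number
    of unsatisfied clauses, using 0 as the underestimation.
    """
    empties = sum(1 for c in phi if not c)
    if all(not c for c in phi):                # only empty clauses remain
        return min(ub, empties)
    lb = empties                               # underestimation(phi) = 0
    if lb >= ub:
        return ub                              # prune this branch
    x = next(l for c in phi if c for l in c)   # naive literal selection
    ub = min(ub, max_sat(assign(phi, x), ub))
    return min(ub, max_sat(assign(phi, -x), ub))

phi = [[1], [-1, 2], [-1, -2], [-1, 3], [-1, -3]]
print(max_sat(phi, len(phi)))  # 1
```

Replacing the zero underestimation and the naive selection heuristic with the
techniques of the following subsections is precisely what separates real BnB
solvers from this skeleton.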
Current Max-SAT solvers that use the BnB algorithm take this basic
form and add advanced techniques to it, mostly by implementing the functions
in Algorithm 4 that are left undefined.
3.2.1 Underestimations
As seen in Algorithm 4, the simplest way to compute the lower bound is
to count the number of empty clauses of an instance. A more sophisticated
method, presented in [30], is to also count the number of pairs of unit clauses
that contradict each other.
Example 3.2.1 Given the Max-SAT instance φ = {□, x1, ¬x1, ¬x2, x1 ∨ x2}, the
number of contradicting pairs of unit clauses is one, corresponding to the
unit clauses x1 and ¬x1. Added to the number of empty clauses, also one, the
underestimated lower bound on the number of unsatisfied clauses is 2.
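This simple underestimation is straightforward to compute. The sketch below is
illustrative Python, with clauses as lists of signed integers and a hypothetical
helper name:

```python
from collections import Counter

def unit_pair_lb(phi):
    """Lower bound: empty clauses plus disjoint pairs of contradicting
    unit clauses (the underestimation described in the text)."""
    empties = sum(1 for c in phi if len(c) == 0)
    units = Counter(c[0] for c in phi if len(c) == 1)
    # each variable contributes min(#positive units, #negative units)
    pairs = sum(min(units[v], units[-v]) for v in units if v > 0)
    return empties + pairs

# one empty clause plus the contradicting unit pair {x1, not x1}
phi = [[], [1], [-1], [-2], [1, 2]]
print(unit_pair_lb(phi))  # 2
```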
Counting unit clauses is of no use with longer clauses. The star rule and UP
can be applied in more situations.
The star rule[31, 32] uses disjoint inconsistent subformulas with the pattern
{¬x1, ..., ¬xk, x1 ∨ ... ∨ xk} to increment the lower bound. It can be seen
that if k is one, this rule is equivalent to the previous one.
Example 3.2.2 Given the Max-SAT instance φ = {¬x1, ¬x2, x1 ∨ x2}, the pattern
of the star rule appears with k = 2 and the literals ¬x1 and ¬x2, so the
underestimation can be incremented.
UP[19] underestimates the lower bound by counting the number of disjoint
inconsistent subformulas found through unit propagation. It works as follows:
first, unit propagation is applied until a contradiction is derived. Then, UP
constructs a refutation proof from the clauses used to produce the
contradiction. It increments the underestimation, removes the clauses of the
refutation proof, and repeats until no more conflicts can be derived. The order
in which unit clauses are propagated has a clear impact on the quality of the
underestimation[20].
Example 3.2.3 Given the Max-SAT instance φ = {¬x1, x3, x1 ∨ x2, x1 ∨ ¬x2,
¬x3 ∨ x1}, we can propagate the first clause with the assignment A = {x1 =
FALSE}. Through A(φ), the inconsistent subset {¬x1, x1 ∨ x2, x1 ∨ ¬x2} can be
derived and removed, and UP can be reapplied with the assignment A′ = {x1 =
FALSE, x3 = TRUE}, where the inconsistent subset {x3, ¬x3 ∨ x1} is detected.
Therefore the underestimation of the lower bound can be incremented by 2.
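A simplified version of the UP underestimation can be sketched as follows.
This is illustrative Python only: real implementations track implication
graphs far more efficiently, and the clause ordering here is naive:

```python
def up_underestimation(phi):
    """Count disjoint inconsistent subformulas with simulated unit
    propagation: a simplified sketch of the UP rule."""
    clauses = [list(c) for c in phi]
    conflicts = 0
    while True:
        reason = {}          # implied literal -> index of implying clause
        core = None
        changed = True
        while changed and core is None:
            changed = False
            for i, c in enumerate(clauses):
                if any(l in reason for l in c):
                    continue                 # satisfied by implied literals
                free = [l for l in c if -l not in reason]
                if not free:                 # clause falsified: conflict
                    core = {i}
                    stack = list(c)          # walk the implication reasons
                    while stack:
                        l = stack.pop()
                        j = reason[-l]
                        if j not in core:
                            core.add(j)
                            stack.extend(x for x in clauses[j] if x != -l)
                    break
                if len(free) == 1:
                    reason[free[0]] = i      # clause is unit: imply literal
                    changed = True
        if core is None:
            return conflicts
        conflicts += 1                       # drop the refutation's clauses
        clauses = [c for i, c in enumerate(clauses) if i not in core]

# two disjoint inconsistent subsets: {not x1, x1 v x2, x1 v not x2}
# and {x3, not x3}
print(up_underestimation([[-1], [1, 2], [1, -2], [3], [-3]]))  # 2
```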
UP with failed literals[20] works as follows: if a literal xi occurs in a
Max-SAT instance φ, we apply UP to φ1 = φ ∪ {xi} and to φ2 = φ ∪ {¬xi}.
If a contradiction is derived in φ1, naming the resulting set of clauses
ϕ1, and a contradiction is derived in φ2, naming the resulting set of clauses
ϕ2, then (ϕ1 ∪ ϕ2)\{xi, ¬xi} is an inconsistent subformula of φ and the
underestimation can be incremented. This technique can therefore be used in
the absence of unit clauses, while UP can only identify unit refutations (UP
derives a contradiction because of the inserted unit clause). Since applying
failed literal detection to every variable is time consuming, in practice it
is applied to a reduced number of variables.
Example 3.2.4 Given the Max-SAT instance φ = {x1 ∨ x2, x1 ∨ ¬x2, ¬x1 ∨ x2,
¬x1 ∨ ¬x2}, UP cannot be applied due to the lack of unit clauses. But applying
UP with failed literals: in φ1 = φ ∪ {x1}, unit propagation derives the
contradiction ϕ1 = {¬x1 ∨ x2, ¬x1 ∨ ¬x2, x1}; in φ2 = φ ∪ {¬x1}, unit
propagation derives the contradiction ϕ2 = {x1 ∨ x2, x1 ∨ ¬x2, ¬x1}. The
subformula resulting from the union of ϕ1 and ϕ2 without the inserted unit
clauses, {x1 ∨ x2, x1 ∨ ¬x2, ¬x1 ∨ x2, ¬x1 ∨ ¬x2}, is a contradiction in φ
and the underestimation can be increased.
Another version of UP[24] is also incremental in the computation of the lower
bound, but guarantees that the lower bound computed is at least the lower
bound computed at the parent node. More specifically, at each branch point in
the search tree, the current node inherits the inconsistent subformulas of its
parent node. Moreover, after branching, the inconsistencies may shrink when
unit propagation is applied to them, and this process increases the
probability of obtaining better lower bounds.
3.2.2 Inference
As stated previously, it is not possible to use traditional SAT inference
rules in Max-SAT. A complete resolution rule for Max-SAT is presented in [33].
Although it cannot be used in efficient solvers, it is possible to infer some
more useful rules to apply when branching in BnB solvers. These rules are also
called transformation rules, as they replace the premises of the rule with its
conclusion. If they were to just add the conclusions, as in SAT inference, the
number of falsified clauses could increase, making the inference unsound.
The basis for using inference is to transform a Max-SAT instance into an
equivalent but simpler one. This transformation can induce the discovery of
empty clauses, thereby increasing the lower bound. The advantage of inference
over underestimations is that all computation may be stored, so the empty
clauses discovered using inference do not have to be recomputed. This makes
inference more incremental than underestimations.
Overall, most inference rules in Max-SAT can be grouped into three categories:
single resolution, variable elimination and hyper-resolution. We will overview
most of the inference rules used in modern Max-SAT solvers.
Single resolution
The unit clause reduction[34] is a direct extension of the unit propagation
used in SAT. If there is a hard unit clause (l, ⊤), then all occurrences of
¬l in other clauses can be removed: since the unit clause must be satisfied,
l must be assigned TRUE, and ¬l can never satisfy a clause. Formally,
{(l, ⊤), (¬l ∨ C, w)} ≡ {(l, ⊤), (C, w)}, where C is a disjunction of literals.
Example 3.2.5 Given the weighted partial Max-SAT instance φ = {(x1 ∨
x2, 1), (x2, 1), (¬x1, ⊤)}, the unit clause reduction rule would transform the
first clause, resulting in the formula φ′ = {(x2, 1), (x2, 1), (¬x1, ⊤)}.
Absorption[34] builds on the same principle of hard clauses being mandatory.
If a hard clause is a subset of another clause, then the other clause can be
removed, given that the hard clause must be satisfied and satisfying it also
satisfies any clause with the same literals or more. Formally, {(C, ⊤), (C ∨
D, w)} ≡ {(C, ⊤)}, where D is a disjunction of literals.
Example 3.2.6 Given the weighted partial Max-SAT instance φ = {(x1 ∨
x2, 1), (x2, 1), (x1, ⊤)}, absorption would remove the first clause, resulting
in the formula φ′ = {(x2, 1), (x1, ⊤)}.
Neighbour resolution[35] is a particular case of Max-SAT resolution where
both clauses share a set of literals. Formally, {(l ∨ C, u), (¬l ∨ C, w)} ≡
{(C, w), (l ∨ C, u ⊖ w)}, where u ≥ w.
Example 3.2.7 Given the weighted partial Max-SAT instance φ = {(x1 ∨
x2, ⊤), (¬x2 ∨ x1, 1), (¬x1, 1)}, neighbour resolution can be applied to the
first two clauses of φ with respect to the literal x2. Applying it results in
the formula φ′ = {(x1 ∨ x2, ⊤), (x1, 1), (¬x1, 1)}.
Variable elimination
The pure literal rule was already presented in section 2.2.2 for SAT. It was
proposed for Max-SAT in [36] and its principle is the same as in SAT: if a
literal appears with only one polarity in a formula, i.e., its complement does
not appear in the formula, the literal is said to be pure. Such a literal can
be satisfied without affecting the number of unsatisfied clauses.
Example 3.2.8 Given the weighted partial Max-SAT instance φ = {(x1 ∨
x2, ⊤), (x1 ∨ ¬x3, 1), (¬x2 ∨ x3, 1)}, the literal x1 is pure in φ. Applying
the pure literal rule, the partial assignment A : {x1 = TRUE} can safely be
applied, and A(φ) results in the formula φ′ = {(¬x2 ∨ x3, 1)}.
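The pure literal rule is simple to implement. The following Python sketch
detects all pure literals of a weighted formula and removes the clauses they
satisfy (the clause representation and helper name are illustrative
assumptions):

```python
def pure_literal_simplify(phi):
    """Apply the pure literal rule to a weighted formula: every literal
    whose complement never occurs can be satisfied, so every clause
    containing such a literal is removed. Clauses are (literal-list,
    weight) pairs."""
    lits = {l for c, _ in phi for l in c}
    pure = {l for l in lits if -l not in lits}
    return [(c, w) for c, w in phi if not any(l in pure for l in c)]

TOP = float("inf")  # stands for the hard weight
phi = [([1, 2], TOP), ([1, -3], 1), ([-2, 3], 1)]
print(pure_literal_simplify(phi))  # [([-2, 3], 1)]
```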
The elimination rule[36] states that if a variable occurs in a formula in only
two clauses, with different polarities (if both occurrences had the same
polarity, the pure literal rule could be applied), then both clauses can be
merged into one without that variable. Formally, if φ = {(l ∨ C, u), (¬l ∨
D, w)} ∪ φ′, where φ′ contains neither l nor ¬l, then φ ≡ {(C ∨
D, min(u, w))} ∪ φ′.
Example 3.2.9 Given the weighted partial Max-SAT instance φ = {(x1 ∨
x4, 1), (¬x1 ∨ x3 ∨ ¬x2, 1), (x2 ∨ x3 ∨ x4, ⊤)}, the elimination rule can be
applied to the literal x1 in the first two clauses of φ. The resulting formula
is φ′ = {(¬x2 ∨ x3 ∨ x4, 1), (x2 ∨ x3 ∨ x4, ⊤)}.
Hyper-resolution
Hyper-resolution in the context of SAT is a resolution-based concept in which
several resolution steps are compressed into one.
The star rule, already mentioned in section 3.2.1, can be extended to a
hyper-resolution rule[37]. It identifies a clause of length k such that each
of its literals appears negated in a unit clause. Formally,

{(l1 ∨ l2 ∨ ··· ∨ lk, w), (¬li, ui)1≤i≤k}
≡
{(l1 ∨ l2 ∨ ··· ∨ lk, w ⊖ m),
(li ∨ ¬(li+1 ∨ li+2 ∨ ··· ∨ lk), m)1≤i<k,
(¬li, ui ⊖ m)1≤i≤k,
(□, m)},

where m = min{w, u1, u2, ..., uk}.
Example 3.2.10 Given the weighted partial Max-SAT instance φ = {(x1 ∨
x2, ⊤), (¬x1, 1), (¬x2, 1)}, the star rule with k = 2 and the literals ¬x1 and
¬x2 can be applied to φ. The resulting formula φ′ = {(x1 ∨ x2, ⊤), (x1 ∨
¬x2, 1), (□, 1)} includes an empty clause that increments the lower bound by
one.
The dominating unit clause rule[38] can be seen as a unit propagation rule for
Max-SAT that is sound: if the number (or total weight) of unit clauses where
the literal l appears in φ is greater than or equal to the number (or total
weight) of clauses where ¬l appears, then l can safely be assigned TRUE in φ.

Example 3.2.11 Given the weighted partial Max-SAT instance φ = {(x1, 2), (¬x1 ∨
x2, 1), (¬x1 ∨ ¬x2, 1)}, the partial assignment A : {x1 = TRUE} can safely be
applied to φ due to the dominating unit clause rule.
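The condition of the rule can be checked directly. The sketch below is
illustrative Python over weighted clauses, comparing the weight of a literal's
unit clauses against the weight of the clauses containing its complement:

```python
def dominating_unit(phi, lit):
    """Check the dominating unit clause condition for `lit`: the total
    weight of the unit clauses (lit) must be at least the total weight
    of the clauses containing the complementary literal."""
    w_units = sum(w for c, w in phi if c == [lit])
    w_compl = sum(w for c, w in phi if -lit in c)
    return w_units >= w_compl

# weight of (x1) is 2, the clauses containing not x1 weigh 1 + 1
phi = [([1], 2), ([-1, 2], 1), ([-1, -2], 1)]
print(dominating_unit(phi, 1))  # True: x1 can safely be set to TRUE
```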
Chain resolution is an original rule from [11] that captures a chain of
binary clauses between two unit clauses. This rule allows an empty clause to
be derived. Formally,

{(l1, u1), (¬li ∨ li+1, ui+1)1≤i<k, (¬lk, uk+1)}
≡
{(li, mi ⊖ mi+1)1≤i≤k,
(¬li ∨ li+1, ui+1 ⊖ mi+1)1≤i<k,
(li ∨ ¬li+1, mi+1)1≤i<k,
(¬lk, uk+1 ⊖ mk+1),
(□, mk+1)},

where mi = min{u1, u2, ..., ui} and li ≠ lj for all 1 ≤ i < j ≤ k.
Example 3.2.12 Given the weighted partial Max-SAT instance φ = {(x1, 1), (¬x3, 1),
(¬x1 ∨ x2, ⊤), (¬x2 ∨ x3, 1)}, we can apply chain resolution with k = 3. The
resulting formula derives a new empty clause: φ′ = {(¬x1 ∨ x2, ⊤), (x1 ∨
¬x2, 1), (x2 ∨ ¬x3, 1), (□, 1)}.
Chain resolution degenerates to neighbour resolution when k = 1 and to the
star rule restricted to a binary clause when k = 2.
Cycle resolution is also an original rule from [11]; it identifies a cycle of
binary clauses whose starting and ending literals come from a single clause.
The rule only derives a unit clause, but that unit clause can be used by other
rules to derive empty clauses. Formally,

{(¬li ∨ li+1, ui)1≤i<k, (¬l1 ∨ ¬lk, uk)}
≡
{(¬l1 ∨ li, mi−1 ⊖ mi)2≤i≤k,
(¬li ∨ li+1, ui ⊖ mi)2≤i<k,
(¬l1 ∨ li ∨ ¬li+1, mi)2≤i<k,
(l1 ∨ ¬li ∨ li+1, mi)2≤i<k,
(¬l1 ∨ ¬lk, uk ⊖ mk),
(¬l1, mk)},

where mi = min{u1, u2, ..., ui} and li ≠ lj for all 1 ≤ i < j ≤ k.
Example 3.2.13 Given the weighted partial Max-SAT instance φ = {(¬x1 ∨
¬x3, 1), (¬x1 ∨ x2, ⊤), (¬x2 ∨ x3, 1), (x1, 1)}, a new unit clause can be
derived using cycle resolution: φ′ = {(¬x1, 1), (¬x1 ∨ x2, ⊤), (¬x1 ∨ x2 ∨
¬x3, 1), (x1 ∨ ¬x2 ∨ x3, 1), (x1, 1)}. The new unit clause can be used to
find a new empty clause with neighbour resolution.
Cycle resolution can also degenerate to neighbour resolution, when k = 2.
3.3 Translations
As previously mentioned, many SAT-based techniques cannot be efficiently
used in Max-SAT solvers. The main reason is the soundness requirement of the
Max-SAT problem, already discussed in section 3.1. Due to the maturity of SAT
solvers, an efficient translation from Max-SAT to SAT may be useful for
solving certain types of Max-SAT instances.
It is also possible to translate to other problems, such as the Pseudo-Boolean
Optimization (PBO) problem. PBO is a well-known NP-hard problem: a special
case of integer linear programming where the variables are assigned Boolean
values.
3.3.1 PBO translation
One alternative for translating Max-SAT instances into other problems is to
translate them into PBO problem instances[39]. The PBO approach to Max-SAT
works as follows: add a blocking variable bi to each clause Ci, as Ci =
Ci ∪ {bi}, where the new blocking variable allows each clause to be satisfied
independently of the other variable assignments. Then create a cost function
that minimizes the number of blocking variables assigned TRUE.
Example 3.3.1 Given the Max-SAT instance φ = {x1, x2, ¬x1 ∨ ¬x2}, the PBO
translation would be φPBO = {x1 ∨ b1, x2 ∨ b2, ¬x1 ∨ ¬x2 ∨ b3}, with the
corresponding cost function min Σi=1..3 bi, where bi is the blocking variable
of clause Ci.
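The translation itself is mechanical. A minimal Python sketch follows, with
blocking variables numbered after the original variables (a common but here
assumed convention):

```python
def to_pbo(phi):
    """Translate a Max-SAT instance into a PBO instance: one fresh
    blocking variable per clause plus a cost function that minimizes
    the number of blocking variables set to TRUE."""
    n = max(abs(l) for c in phi for l in c)
    clauses = [c + [n + i + 1] for i, c in enumerate(phi)]
    cost = [n + i + 1 for i in range(len(phi))]  # minimize their sum
    return clauses, cost

clauses, cost = to_pbo([[1], [2], [-1, -2]])
print(clauses)  # [[1, 3], [2, 4], [-1, -2, 5]]
print(cost)     # [3, 4, 5]
```

The output makes the scaling problem discussed next visible: three blocking
variables were added for only two original variables.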
Although simple to formulate, the PBO translation often does not scale well,
since the number of new variables grows linearly with the number of clauses.
Note that in instances like the one in example 3.3.1, the number of blocking
variables is greater than the number of non-blocking variables, making the
search space of the PBO instance much larger than that of the original
Max-SAT instance.
3.3.2 SAT translation
An approach similar to the PBO one is also possible. SAT4Jmaxsat[40] is a
Max-SAT solver that translates a Max-SAT instance into a SAT problem. Given
the Max-SAT instance φ, it adds a blocking variable bi to each clause in φ,
with 1 ≤ i ≤ |φ|. Then it uses SAT4J, a SAT solver from the same author that
supports cardinality constraints, to solve the new instance as a SAT problem.
Each time the SAT solver finds an assignment A that satisfies the formula, it
adds a cardinality constraint Σi bi < |{blocking variables with value TRUE
in A}|. When the formula can no longer be satisfied, the number of blocking
variables assigned TRUE in the last iteration is the solution of the original
Max-SAT problem.
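The iterative scheme can be sketched with a toy brute-force SAT oracle
standing in for SAT4J (illustrative Python only; a real solver encodes the
cardinality constraint into the formula instead of checking it explicitly):

```python
from itertools import product

def brute_sat(clauses, n_vars, blocking, bound):
    """Toy SAT oracle with one cardinality constraint: look for a model
    in which fewer than `bound` blocking variables are TRUE."""
    for bits in product([False, True], repeat=n_vars):
        def true(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(true(l) for l in c) for c in clauses) \
                and sum(bits[b - 1] for b in blocking) < bound:
            return bits
    return None

def iterative_maxsat(phi):
    """SAT4Jmaxsat-style loop: add one blocking variable per clause,
    then repeatedly tighten the cardinality constraint on the blocking
    variables until the formula becomes unsatisfiable."""
    n = max(abs(l) for c in phi for l in c)
    blocking = [n + i + 1 for i in range(len(phi))]
    clauses = [c + [b] for c, b in zip(phi, blocking)]
    bound, best = len(phi) + 1, None       # no real constraint at first
    while True:
        model = brute_sat(clauses, n + len(phi), blocking, bound)
        if model is None:
            return best                    # minimum unsatisfied clauses
        best = sum(model[b - 1] for b in blocking)
        bound = best                       # demand strictly fewer next time

print(iterative_maxsat([[1], [2], [-1, -2]]))  # 1
```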
An approach that translates a Max-SAT instance into a SAT instance while
reducing the number of blocking variables is to add blocking variables only
to the unsatisfiable subformulas (or unsatisfiable cores) of the Max-SAT
instance. Most SAT solvers can extract these unsatisfiable cores from a
resolution refutation of the SAT problem. This approach was first introduced
in [41] and further developed in [42], which describes the algorithm from
[41] and adapts it to Max-SAT, naming it MSU1 (originally it was meant only
for partial Max-SAT).
Algorithm 5 works by first trying to solve the SAT problem for the formula
φW, the original input formula of the algorithm. While the formula cannot be
satisfied, the SAT solver extracts an unsatisfiable core φC (line 3) and sets
the Boolean variable st to FALSE. For each non-auxiliary clause in the
unsatisfiable core, the algorithm adds a new blocking variable b (line 9) and
requires that the One-Hot constraint is enforced (line 12). The One-Hot
constraint[41] states that exactly one of the new blocking variables must be
assigned TRUE. This action updates the SAT formula so that, when finally
solved, it yields the minimum number of unsatisfied clauses as the number of
satisfied
Algorithm 5: The Max-SAT algorithm of Fu & Malik[41]
Input: msu1(φ): a CNF formula φ
Output: The maximum number of satisfiable clauses of φ
1   φW ← φ;
2   while TRUE do
3       (st, φC) ← SAT(φW);
4       if st = FALSE then
5           BV ← ∅;
6           foreach C ∈ φC do
7               if C is not auxiliary then
8                   b is a new auxiliary variable;
9                   Cb ← C ∪ {b};
10                  φW ← φW − {C} ∪ {Cb};
11                  BV ← BV ∪ {b};
12          φB ← CNF(Σb∈BV b = 1);
13          φW ← φW ∪ φB;
14      else
15          ν ← |blocking variables with value TRUE|;
16          return |φ| − ν;
blocking variables. The One-Hot constraint is implemented as a pairwise
encoding of the Equals1 constraint, i.e., with two constraints: one AtMost1
constraint and one AtLeast1 constraint. The problem with this encoding is
that the AtMost1 constraint requires a quadratic number of auxiliary clauses.
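The pairwise encoding is easy to generate, which also makes its quadratic
growth visible (illustrative Python; `one_hot` is an assumed helper name):

```python
from itertools import combinations

def one_hot(vars_):
    """Pairwise encoding of the One-Hot (Equals1) constraint: one
    AtLeast1 clause plus one AtMost1 clause per pair of variables,
    i.e. a quadratic number of clauses."""
    at_least_1 = [list(vars_)]
    at_most_1 = [[-a, -b] for a, b in combinations(vars_, 2)]
    return at_least_1 + at_most_1

print(one_hot([4, 5, 6]))
# [[4, 5, 6], [-4, -5], [-4, -6], [-5, -6]]
```

For n variables the encoding produces 1 + n(n−1)/2 clauses, which is the
quadratic blow-up mentioned above.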
Example 3.3.2 Given the Max-SAT instance φ = {x1, x2, ¬x1 ∨ ¬x2}, the SAT
solver detects the unsatisfiable core φC = {x1, x2, ¬x1 ∨ ¬x2}. The solver
adds three new blocking variables, one per clause, and the One-Hot constraint
is enforced with auxiliary clauses, resulting in the formula {x1 ∨ b1, x2 ∨
b2, ¬x1 ∨ ¬x2 ∨ b3, ¬b1 ∨ ¬b2, ¬b1 ∨ ¬b3, ¬b2 ∨ ¬b3, b1 ∨ b2 ∨ b3}. This
formula is satisfied by the assignment A : {x1 = TRUE, x2 = TRUE, b3 =
TRUE, b1 = FALSE, b2 = FALSE}. The number of blocking variables assigned
TRUE, here one (b3), is the minimum number of unsatisfied clauses.
MSU1 was later improved in [42] to use a different cardinality constraint
encoding instead of the pairwise one. It also imposes an AtMost1 constraint on
the number of blocking variables in each clause. The cardinality constraint
encoding used in this new version is based on Binary Decision Diagrams (BDDs),
and is linear in the number of variables in the constraint[43].
In MSU1, there is an unbounded number of blocking variables for any given
clause: one for each time the clause participates in an unsatisfiable core.
For the correct execution of the algorithm, at most one blocking variable per
clause may be set to TRUE, as proved in [42]. So, a simple method to prune the
search space associated with the blocking variables is to add to the working
formula, for each clause Ci with blocking variables, an AtMost1 constraint
Σj bi,j ≤ 1, where j ranges over the blocking variables associated with
clause Ci. As this constraint is also an AtMost1, it can use the BDD encoding
as well. This new version of the algorithm, named MSU2, was renamed MSU1.1 in
[44] for the sake of naming convention. We will use this latter convention
from now on.
One of the problems already mentioned for MSU1 is the fact that a clause can
have more than one blocking variable. A new proposal for this approach was
made in [42] that guarantees at most one blocking variable per clause. This
is accomplished by solving the problem in two phases. In the first phase, the
algorithm extracts the disjoint unsatisfiable cores present in the CNF
formula and associates a blocking variable with each of their clauses. The
number of disjoint unsatisfiable cores is the lower bound ν on the number of
blocking variables that need to be set to TRUE, or equivalently, the lower
bound on the number of unsatisfied clauses. To ensure that only one blocking
variable is needed per clause, it adds a different cardinality constraint,
Σb∈BV b = ν, where BV is the set of blocking variables already associated
with clauses and ν is the current lower bound on the number of blocking
variables that need to be set to TRUE. The second phase is similar to the
original Algorithm 5, in that it iteratively extracts new unsatisfiable cores
until the formula is satisfiable. The difference lies in the fact that only
clauses that are part of a new unsatisfiable core and do not yet have a
blocking variable get a new one, thus guaranteeing that at most one blocking
variable is present in each clause. For each new unsatisfiable core found,
the algorithm increments the lower bound and updates the cardinality
constraint associated with it. When the algorithm ends, i.e., when the
resulting formula is satisfiable, the lower bound ν is the minimum number of
unsatisfied clauses. This version was named MSU3 and also uses a BDD-based
encoding. The main drawback of this version is its blocking variable
encoding, which is more complex than the AtMost1 version used by MSU1 and
MSU1.1.
The last iteration of the MSU algorithm is MSU4[45], which besides using a
single cardinality constraint as in MSU3, also reduces the complexity of the
cardinality encoding and adds an upper bound on the number of blocking
variables that need to be set to TRUE, in addition to the lower bound ν that
MSU3 already used. The algorithm is described below.
Algorithm 6: The MSU4 Algorithm[45]
Input: msu4(φ): a CNF formula φ
Output: The maximum number of satisfiable clauses of φ
1   φW ← φ;
2   µBV ← |φ|;
3   νU ← 0;
4   VB ← ∅;
5   UB ← |φ| + 1;
6   LB ← 0;
7   while TRUE do
8       (st, φC) ← SAT(φW);
9       if st = FALSE then
10          φI ← φC ∩ φ;
11          I ← {i | Ci ∈ φI};
12          VB ← VB ∪ I;
13          if |I| > 0 then
14              φN ← {Ci ∪ {bi} | Ci ∈ φI};
15              φW ← (φW − φI) ∪ φN;
16              φT ← CNF(Σi∈I bi ≥ 1);
17              φW ← φW ∪ φT;
18          else
19              return UB;
20          νU ← νU + 1;
21          UB ← |φ| − νU;
22      else
23          ν ← |blocking variables with value TRUE|;
24          if ν < µBV then
25              µBV ← ν;
26              LB ← |φ| − µBV;
27          φT ← CNF(Σi∈VB bi ≤ µBV − 1);
28          if LB = UB then
29              return UB;
This algorithm uses the same approach as the previous versions to solve
the Max-SAT problem. First it translates the problem into a SAT problem
by adding blocking variables and cardinality constraints to make it
equivalent. Much like MSU3, it uses at most one blocking variable for each
clause that participates in an unsatisfiable core, but it uses a different
method to enforce the cardinality of the blocking variables. Instead of a
Σb∈BV b = ν constraint like the one used in MSU3, it uses constraints like
those used in MSU1.1, which are significantly less complex (lines 10-21).
When an unsatisfiable core is found, it extracts the clauses that participate
in the core and do not yet have a blocking variable (line 10). It then adds
blocking variables to those clauses (lines 11-15) and updates the cardinality
constraints (lines 16-17), or finishes otherwise (lines 18-19). When it
detects a new unsatisfiable core, it also increments a counter νU that is
used to compute the upper bound on the minimum number of unsatisfied clauses
(lines 20-21). The cardinality constraints on the blocking variables do not
by themselves guarantee that a SAT solution corresponds to a Max-SAT
solution. So, when the formula is found to be satisfiable, the algorithm
checks whether the lower bound should be updated (lines 23-26) and adds a
cardinality constraint enforcing that at most µBV − 1 blocking variables are
set to TRUE (line 27), where µBV is the minimum number of blocking variables
needed to be set to TRUE so far. The algorithm stops when the lower bound is
equal to the upper bound.
3.4 Summary
In this chapter, we gave an overview of Max-SAT and its variants. We
introduced the different solving approaches, from the most common, BnB, to
the more novel translation into other problems. This will be the basis for
the rest of this thesis, whose focus is the gain obtained from combining
parts of these two techniques to best solve real-world instances.
4
Combining approaches
Although BnB is the most popular technique for solving Max-SAT instances, it
branches on each literal of the formula, so the search space quickly
explodes. On the other hand, the translation technique is useful when there
are many unsatisfiable subformulas.
It can be said that BnB is more suitable for random, homogeneous instances,
whereas algorithms such as msuncore perform best on industrial instances,
where there are many more variables but the instances are much less random
and more heterogeneous.
The focus of this chapter is to extend the approach used in the msuncore
algorithm with some of the techniques used in BnB, more specifically
inference rules applied as a preprocessor of the instance.
At the time of writing this thesis, msuncore[44] focused on partial Max-SAT
instances, so we will also focus on partial Max-SAT. We will also use the
notation of section 3.2.2, this time for partial Max-SAT instead of weighted
partial Max-SAT; the only difference is that soft clauses always have weight
one.
4.1 Efficient Inference
As stated previously, we will focus on efficient inference rules for partial
Max-SAT. The restriction to partial Max-SAT gives us stricter rules. Also, at
the time of the development of this document, the best-known translation-based
algorithm, msuncore, only supported partial Max-SAT instances.
Single resolution and variable elimination, presented in section 3.2.2, are
simple enough; they will not be the focus of this simplification. In
hyper-resolution the inference rules are more complex and may use several
clauses and literals; hence, we will limit their scope of application, not
only to make them usable in a preprocessing step but also to simplify the
implementation. The latter also benefits the former.
4.1.1 Star rule
The star rule, already mentioned in section 3.2.2, is also used for
underestimations with k = 2. The reason it is not usually used with more
literals in inference is simple: as can be observed, the compensation clauses
generated after replacing the original clauses of the inference are not in
CNF form:

(li ∨ ¬(li+1 ∨ li+2 ∨ ··· ∨ lk), m)1≤i<k

It is necessary to expand the negation of each subformula, which generates as
many clauses as there are literals in the negated subformula; summed over all
compensation clauses, the expansion is quadratic in k.
Example 4.1.1 Given the partial Max-SAT instance φ = {(¬x1 ∨ ¬x2 ∨ ¬x3 ∨
¬x4, T), (x1, 1), (x2, 1), (x3, 1), (x4, 1)}, the star rule with k = 4 can be ap-
plied to φ. The resulting formula φ′ = {(¬x1 ∨ ¬x2 ∨ ¬x3 ∨ ¬x4, T), (x1 ∨
¬(x2 ∨ x3 ∨ x4), 1), (x2 ∨ ¬(x3 ∨ x4), 1), (x3 ∨ ¬x4, 1), (□, 1)} would have to
be further expanded for each negated sub-formula.
It can be concluded that this rule is useful in a limited number of situations,
namely with k ≤ 2. A higher k would generate a quadratic number of clauses,
with the added complexity of the encoding.
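The expansion cost can be illustrated with a short sketch (an illustration only, using DIMACS-style integer literals, where a negative integer denotes a negated variable): each clause of the form (li ∨ ¬(li+1 ∨ · · · ∨ lk)) expands into the binary clauses (li ∨ ¬lj) for i < j ≤ k, so the whole rule produces k(k − 1)/2 clauses, quadratic in k.

```python
def star_rule_expansion(lits):
    """CNF expansion of the clauses (l_i v ~(l_{i+1} v ... v l_k)), 1 <= i < k.

    Since ~(a v b v ...) == ~a ^ ~b ^ ..., each such clause expands into
    the binary clauses (l_i v ~l_j) for every j > i.  Literals are ints;
    negation is integer negation.
    """
    k = len(lits)
    return [(lits[i], -lits[j])
            for i in range(k - 1) for j in range(i + 1, k)]

clauses = star_rule_expansion([1, 2, 3, 4])  # k = 4, as in Example 4.1.1
print(len(clauses))  # 6 = k * (k - 1) / 2 binary clauses
```

For k = 2 the expansion is a single binary clause, which is why the rule remains practical only up to that point.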
4.1.2 Dominating unit clause
As stated in section 3.2.2, the dominating unit clause is a sound unit
propagation rule for Max-SAT. In this work, we will focus on the unit clauses
among the hard clauses. If there are any hard unit clauses, the unit propagation
rule is applied: all clauses where the literal of the unit clause appears are
removed, and the negated literal is removed from every clause in which it occurs.
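A minimal sketch of this pre-processing step (an illustration with DIMACS-style integer literals, not the actual pre-processor implementation):

```python
def propagate_hard_units(hard, soft):
    """Repeatedly apply unit propagation on hard unit clauses.

    Clauses containing the propagated literal are removed (satisfied);
    the opposite literal is deleted from the remaining clauses.  An empty
    soft clause left behind counts as one unit of unavoidable cost.
    """
    hard = [list(c) for c in hard]
    soft = [list(c) for c in soft]
    assigned = set()
    while True:
        units = [c[0] for c in hard if len(c) == 1]
        if not units:
            return hard, soft, assigned
        lit = units[0]
        assigned.add(lit)
        hard = [[l for l in c if l != -lit] for c in hard if lit not in c]
        soft = [[l for l in c if l != -lit] for c in soft if lit not in c]

hard, soft, assigned = propagate_hard_units(
    hard=[[1], [-1, 2]], soft=[[-1, 3], [-2, -1]])
print(assigned)  # {1, 2}: both literals are forced by propagation
print(soft)      # [[3], []] -- the empty soft clause adds 1 to the cost
```

Note that propagation of one hard unit clause can expose new hard unit clauses, hence the fixpoint loop.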
4.1.3 Chain resolution
This rule, already presented in section 3.2.2, is an original inference rule
that captures a chain of binary clauses, together with star clauses and unit
clauses. The usefulness of this rule lies in the fact that it exploits soft unit
and binary clauses, something that is difficult for most inference rules. Besides,
it generates an empty clause, cutting the search space of the instance.
Although this rule generates a linear number of clauses, this number can be
more than three times the original number of clauses of the inference rule. We
have therefore limited this rule so that both unit clauses must be soft. If either
unit clause were hard, the dominating unit clause rule could be applied instead,
which states that hard unit clauses can be used for unit propagation. When a
hard unit clause is used for unit propagation, all clauses where its literal appears
are affected, including those in a possible chain; hence, chain resolution is not
applied when a unit clause is hard.
Next, we present a simplified chain resolution rule, with unit clauses limited
to soft clauses:

(l1, u1),
(¬li ∨ li+1, ui+1), 1 ≤ i < k,
(¬lk, uk)
≡
(¬li ∨ li+1, ui+1 − 1), 1 ≤ i < k,
(li ∨ ¬li+1, 1), 1 ≤ i < k,
(□, 1),

where ∀ 1 ≤ i < j ≤ k, li ≠ lj and u1 = 1, uk = 1.
This simplification allows us to limit the number of clauses generated by
chain resolution to at most double the original number of clauses. If all
clauses are soft, the number of clauses actually decreases.
We previously stated that the star rule could only be used efficiently up
to a limit. Chain resolution will be used instead of neighbour resolution with
unit clauses and instead of the star rule with binary clauses.
The example from section 3.2.12 already applies this rule.
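The cost equivalence claimed by this simplified rule can be checked by brute force on a small case (a verification sketch, not solver code; here k = 3, the binary clauses have weight 2, literals are DIMACS-style integers, and the empty clause □ is represented as an empty tuple):

```python
from itertools import product

def cost(weighted_clauses, assignment):
    """Total weight of falsified clauses; the empty clause () is always falsified."""
    return sum(w for lits, w in weighted_clauses
               if not any(assignment[abs(l)] == (l > 0) for l in lits))

# Simplified chain resolution, k = 3: soft units (x1, 1) and (~x3, 1),
# binary chain clauses (~x1 v x2) and (~x2 v x3), each with weight 2.
left = [((1,), 1), ((-1, 2), 2), ((-2, 3), 2), ((-3,), 1)]
right = [((-1, 2), 1), ((-2, 3), 1),   # original binary clauses, weight - 1
         ((1, -2), 1), ((2, -3), 1),   # compensation clauses
         ((), 1)]                      # derived empty clause

for bits in product((False, True), repeat=3):
    a = {1: bits[0], 2: bits[1], 3: bits[2]}
    assert cost(left, a) == cost(right, a)
print("equivalent on all 8 assignments")
```

Both sides falsify the same total weight under every assignment, which is exactly what soundness of a Max-SAT inference rule requires.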
4.1.4 Cycle resolution
Cycle resolution is another hyper resolution rule by the same authors as
chain resolution. The main objective of using this rule is to derive a new unit
clause, so that it can be used with other rules such as chain resolution.
This rule also generates a linear number of clauses, but it can be up to four
times the original number of clauses, and it does not have the same impact on
search space reduction. These factors limit its usefulness.
We will use cycle resolution with k ≤ 3 and with the first clause in the rule
being soft. These restrictions allow a good trade-off between the number of
non-unit clauses generated and the unit clause gained from this trade.
When k = 2, cycle resolution is equal to neighbour resolution with binary
clauses. We present the full inference rule with 3 clauses that we will be using:
(l1 ∨ l2, u1),
(¬l2 ∨ ¬l3, u2),
(l1 ∨ l3, u3)
≡
(¬l2 ∨ ¬l3, u2 − 1),
(l1 ∨ l2 ∨ l3, 1),
(¬l1 ∨ ¬l2 ∨ ¬l3, 1),
(l1 ∨ l3, u3 − 1),
(l1, 1),

where ∀ 1 ≤ i < j ≤ 3, li ≠ lj and u1 = 1.
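As a sanity check, this instance of the rule can be verified by exhaustively comparing the Max-SAT cost of both sides over all assignments (a verification sketch with u2 = u3 = 2; literals are DIMACS-style integers and negations are written as negative integers):

```python
from itertools import product

def cost(weighted_clauses, assignment):
    """Total weight of falsified clauses under `assignment` (var -> bool)."""
    return sum(w for lits, w in weighted_clauses
               if not any(assignment[abs(l)] == (l > 0) for l in lits))

# Cycle resolution with 3 clauses: (x1 v x2, 1), (~x2 v ~x3, 2), (x1 v x3, 2).
left = [((1, 2), 1), ((-2, -3), 2), ((1, 3), 2)]
right = [((-2, -3), 1), ((1, 3), 1),          # original clauses, weight - 1
         ((1, 2, 3), 1), ((-1, -2, -3), 1),   # compensation clauses
         ((1,), 1)]                           # the derived unit clause

for bits in product((False, True), repeat=3):
    a = {1: bits[0], 2: bits[1], 3: bits[2]}
    assert cost(left, a) == cost(right, a)
print("equivalent on all 8 assignments")
```

The unit clause (l1, 1) on the right-hand side is the payoff: it can now feed other rules, such as chain resolution.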
5
Testing
In this chapter we define the strategies used to verify whether pre-processing
partial Max-SAT instances with inference techniques is useful for real world
instances in translation based approaches.
First, we describe the techniques used for pre-processing as well as the
configuration used for testing. Then, we analyse the results obtained from
the tests.
5.1 Testing Setup
The techniques used for pre-processing the partial Max-SAT instances were
described in section 4.1. We will use the following abbreviations for the
pre-processing applied to the instances:
none The original instance, without any pre-processing used.
neigh Neighbour resolution, presented in section 3.2.2
unit Dominating unit clause rule, presented in section 4.1.2
cycle Cycle resolution, presented in section 4.1.4
chain Chain resolution, presented in section 4.1.3
all All the rules stated above applied in the pre-processing.
The version of the msuncore solver used is 1.2. The machine setup consists
of an Intel Xeon 5160 dual processor at 3.0 GHz, with 2 GiB of memory for
each solver run.
The instances used for these tests were the same as those used in the Max-SAT
evaluation 2008[46]:

random Randomly generated instances. 150 in total.
crafted Instances crafted from particular problems. 298 in total.
industrial Industrial instances, converted from real world problems. 1950 in
total.
Each instance had an upper bound time limit of 1200 seconds, covering both
the pre-processing and the solving of the instance.
5.2 Testing Analysis
First, we analyse how many instances could be pre-processed, i.e., how many
instances were successfully transformed with the applied inference rule(s)
within the given time.
Then, we compare how many instances were solved in each of the categories:
random, crafted and industrial. For each category, a table displays the number
of instances that msuncore solved within the given time limit for each inference
technique applied in the pre-processing, together with the ratio between the
number of solved instances and the total number of instances. For each category
we also show a plot with the time needed to solve a sample of instances; each
subsection describes its sample, chosen to maximize the visibility of the
differences between the rules.
Pre-processing   Nr. processed (out of 2398)   Ratio
none             2398                          100%
neigh            2398                          100%
unit             848                           35.36%
cycle            2398                          100%
chain            2398                          100%
all              688                           28.69%

Table 5.1: Pre-processed instances
5.2.1 Pre-processing
The first step is to pre-process all instances: the pre-processor was run
with each technique on each instance. Because some techniques may have an
impact on the number of unsatisfiable clauses, a comment line was added to each
processed instance with the number of unsatisfiable clauses eliminated.
The number of instances processed by each technique is shown in Table 5.1.
For those instances that could not be pre-processed, the instance without
pre-processing was used instead. Unit inference was the only individual
inference rule that did not pre-process all the instances. As a result, applying
all the inference rules together was also unable to pre-process all the instances.
These results make the unit pre-processing, and the combination of all rules,
less reliable, as most of the instances will not be pre-processed.
5.2.2 Random instances
As stated previously, translation based solvers do not perform well with
random instances. This is the case for msuncore, which also did not perform
well in this category in the Max-SAT evaluation.
A similar pattern emerged in our tests: msuncore without pre-processing was
only able to solve 27 of the 150 instances. The only configurations that solved
more random instances were chain resolution and all rules applied. The increase
was not significant, from 27 to 28 solved instances, an improvement of 0.67
percentage points.
Pre-processing   Nr. solved (out of 150)   Ratio
none             27                        18%
neigh            27                        18%
unit             27                        18%
cycle            27                        18%
chain            28                        18.67%
all              28                        18.67%

Table 5.2: Solved random instances
These results are reflected in Table 5.2. The differences in the time required
to solve the instances were also not significant. Figure 5.1 shows the time
msuncore took to solve a sample of 30 random instances, ordered by the time
required to solve, for each inference rule.
Figure 5.1: Time to solve a sample of random instances
5.2.3 Crafted instances
Crafted instances also have a homogeneous structure of unsatisfiable cores,
which makes them difficult to solve for translation based solvers such as
msuncore. As happened with the random instances, msuncore without
pre-processing did not solve most of the instances: only 49 out of 298. None
of the inference rules helped to increase the number of instances solved.
As can be seen in Table 5.3, applying all the rules solved as many instances
as applying none, but this can be explained by the number of pre-processed
Pre-processing   Nr. solved (out of 298)   Ratio
none             49                        16.44%
neigh            45                        15.10%
unit             45                        15.10%
cycle            45                        15.10%
chain            46                        15.44%
all              49                        16.44%

Table 5.3: Solved crafted instances
instances when all rules are applied. Only 28.69% were translated, and so most
of the instances for this configuration were the same as with no rules applied.
Figure 5.2: Time to solve a sample of crafted instances
Figure 5.2 shows the time msuncore spent solving a sample of the 50 fastest
crafted instances for each inference rule.
5.2.4 Industrial instances
Similar to what happened in the Max-SAT evaluation, msuncore solved most
of the industrial instances. The heterogeneous structure of the unsatisfiable
cores makes translation based solvers more efficient than most branch-and-
bound solvers.
Some of the pre-processing steps did help to solve more instances than solving
without them, as shown in Table 5.4. Without pre-processing, msuncore was able
to solve 1544 of the 1950 industrial instances, i.e., 79.18% of them. Unit and
Pre-processing   Nr. solved (out of 1950)   Ratio
none             1544                       79.18%
neigh            1548                       79.38%
unit             1538                       78.87%
cycle            1531                       78.51%
chain            1555                       79.74%
all              1540                       78.97%

Table 5.4: Solved industrial instances
cycle rules, along with all rules applied, actually decreased the number of
solved instances, to 1538, 1531 and 1540, respectively. Neighbour resolution
and chain resolution increased the number of solved instances, to 1548 and
1555, respectively.
Figure 5.3 shows the time spent by msuncore on a sample of 300 instances,
between the 1260th and the 1560th instances to be solved, for each inference
rule.
Figure 5.3: Time to solve a sample of industrial instances
6
Conclusions and Future Work
Max-SAT and its variants are still used to solve real world problems that
cannot easily be solved in their original encoding. The technique that has
given the best results on these problems is the translation-based approach. In
our work we tried to bring in some of the techniques used in the well-known
branch-and-bound method.
We used techniques adapted from work on SAT, such as the equivalent of
unit propagation for Max-SAT. We also used some more recent original
techniques, known as hyper resolution, from the work of [37]. One of the first
conclusions to be drawn is that, for more homogeneous problems, such as random
and crafted instances, the translation based approach did not improve much with
pre-processing.
For the crafted instances the changes were also negligible, this being another
set of instances where translation based solvers such as msuncore have more
difficulties. The solver never solved more instances with the pre-processor
than without it.
As previously stated, the focus was on real world problems, and as such, it
was already expected that results would not improve on homogeneous instances
such as random or crafted ones.
In the industrial instances there were some improvements. Neighbour and
chain resolution pre-processed instances were solved more often. This is an
indication that some resolution rules do help to solve more real world instances,
and so they should be used as a pre-processor for msuncore and perhaps other
translation based solvers.
As a post-mortem analysis, a study of the pre-processed instances should
reveal more information about these results. Only some of the inference rules
were helpful, while others were actually worse than not using them.
As future work, the pre-processing techniques that did help in solving
industrial instances, such as neighbour resolution and chain resolution, could
be incorporated inside the translation based approach, to trim the solution
after each unsuccessful iteration. Also, more techniques could be used, and a
combination of the most efficient ones could serve as a final version of the
pre-processor. Finally, the pre-processor could be further improved to give
intermediate results; future versions should incorporate this feature.
Bibliography
[1] Xu, H., Rutenbar, R.A., Sakallah, K.A.: sub-sat: a formulation for relaxed
boolean satisfiability with applications in routing. IEEE Trans. on CAD of
Integrated Circuits and Systems 22 (2003) 814–820
[2] Safarpour, S., Mangassarian, H., Veneris, A.G., Liffiton, M.H., Sakallah,
K.A.: Improved design debugging using maximum satisfiability. In: FM-
CAD, IEEE Computer Society (2007) 13–19
[3] Strickland, D.M., Barnes, E., Sokol, J.S.: Optimal protein structure align-
ment using maximum cliques. Oper. Res. 53 (2005) 389–402
[4] Vasquez, M., Hao, J.K.: A “logic-constrained” knapsack formulation and a
tabu algorithm for the daily photograph scheduling of an earth observation
satellite. Comput. Optim. Appl. 20 (2001) 137–157
[5] Park, J.D.: Using weighted max-sat engines to solve mpe. In: AAAI/IAAI.
(2002) 682–687
[6] Cook, S.A.: The complexity of theorem-proving procedures. In: STOC ’71:
Proceedings of the third annual ACM symposium on Theory of computing,
New York, NY, USA, ACM (1971) 151–158
[7] Turing, A.M.: On computable numbers, with an application to the entschei-
dungsproblem. Proceedings of the London Mathematical Society 42 (1937)
230–265
[8] Hopcroft, J.E., Ullman, J.D.: Introduction to Automata Theory, Lan-
guages and Computation. Addison-Wesley (1979)
[9] Garey, M.R., Johnson, D.S.: Computers and Intractability; A Guide to the
Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA
(1990)
[10] Cohen, D.A., Cooper, M.C., Jeavons, P.: A complete characterization of
complexity for boolean constraint optimization problems. In Wallace, M.,
ed.: CP. Volume 3258 of Lecture Notes in Computer Science., Springer
(2004) 212–226
[11] Larrosa, J., Heras, F., de Givry, S.: A logical approach to efficient max-sat
solving. Artif. Intell. 172 (2008) 204–233
[12] Davis, M., Putnam, H.: A computing procedure for quantification theory.
J. ACM 7 (1960) 201–215
[13] Davis, M., Logemann, G., Loveland, D.W.: A machine program for
theorem-proving. Commun. ACM 5 (1962) 394–397
[14] Selman, B., Levesque, H.J., Mitchell, D.G.: A new method for solving hard
satisfiability problems. In: AAAI. (1992) 440–446
[15] Selman, B., Kautz, H.A., Cohen, B.: Noise strategies for improving local
search. In: AAAI. (1994) 337–343
[16] Alsinet, T., Manya, F., Planes, J.: Improved branch and bound algorithms
for max-sat. In: Proceedings of the 6th International Conference on the
Theory and Applications of Satisfiability Testing. (2003)
[17] Alsinet, T., Manya, F., Planes, J.: Improved exact solvers for weighted
max-sat. [47] 371–377
[18] Alsinet, T., Manya, F., Planes, J.: An efficient solver for weighted max-sat.
J. of Global Optimization 41 (2008) 61–73
[19] Li, C.M., Manya, F., Planes, J.: Exploiting unit propagation to compute
lower bounds in branch and bound max-sat solvers. In: CP. (2005)
[20] Li, C.M., Manya, F., Planes, J.: Detecting disjoint inconsistent subformu-
las for computing lower bounds for max-sat. [48]
[21] Li, C.M., Manya, F., Planes, J.: New inference rules for max-sat. J. Artif.
Intell. Res. (JAIR) 30 (2007) 321–359
[22] Pipatsrisawat, K., Darwiche, A.: Clone: Solving weighted max-sat in a re-
duced search space. In Orgun, M.A., Thornton, J., eds.: Australian Confer-
ence on Artificial Intelligence. Volume 4830 of Lecture Notes in Computer
Science., Springer (2007) 223–233
[23] Lin, H., Su, K.: Exploiting inference rules to compute lower bounds for
max-sat solving. In Veloso, M.M., ed.: IJCAI. (2007) 2334–2339
[24] Lin, H., Su, K., Li, C.M.: Within-problem learning for efficient lower
bound computation in max-sat solving. In Fox, D., Gomes, C.P., eds.:
AAAI, AAAI Press (2008) 351–356
[25] Heras, F., Larrosa, J., Oliveras, A.: Minimaxsat: A new weighted max-sat
solver. [49] 41–55
[26] Heras, F., Larrosa, J., Oliveras, A.: Minimaxsat: An efficient weighted
max-sat solver. J. Artif. Intell. Res. (JAIR) 31 (2008) 1–32
[27] Argelich, J., Manya, F.: Partial max-sat solvers with clause learning. [49]
28–40
[28] Ramírez, M., Geffner, H.: Structural relaxations by variable renaming and
their compilation for solving mincostsat. In Bessiere, C., ed.: CP. Volume
4741 of Lecture Notes in Computer Science., Springer (2007) 605–619
[29] Land, A., Doig, A.: An automatic method for solving discrete programming
problems. Econometrica 28 (1960) 497–520
[30] Wallace, R.J., Freuder, E.C.: Comparative studies of constraint satisfac-
tion and davis-putnam algorithms for maximum satisfiability problems. In:
Cliques, Coloring and Satisfiability, American Mathematical Society (1996)
587–615
[31] Shen, H., Zhang, H.: Study of lower bound functions for max-2-sat. In
McGuinness, D.L., Ferguson, G., eds.: AAAI, AAAI Press / The MIT
Press (2004) 185–190
[32] Alsinet, T., Manya, F., Planes, J.: A max-sat solver with lazy data struc-
tures. In Lemaître, C., Reyes, C.A., González, J.A., eds.: IBERAMIA.
Volume 3315 of Lecture Notes in Computer Science., Springer (2004) 334–
342
[33] Bonet, M.L., Levy, J., Manya, F.: Resolution for max-sat. Artif. Intell.
171 (2007) 606–618
[34] Larrosa, J., Heras, F.: Resolution in max-sat and its relation to local
consistency in weighted csps. In Kaelbling, L.P., Saffiotti, A., eds.: IJCAI,
Professional Book Center (2005) 193–198
[35] Cha, B., Iwama, K.: Adding new clauses for faster local search. In:
AAAI/IAAI, Vol. 1. (1996) 332–337
[36] Bansal, N., Raman, V.: Upper bounds for maxsat: Further improved. In
Aggarwal, A., Rangan, C.P., eds.: ISAAC. Volume 1741 of Lecture Notes
in Computer Science., Springer (1999) 247–258
[37] Heras, F., Larrosa, J.: New inference rules for efficient max-sat solving.
[48]
[38] Niedermeier, R., Rossmanith, P.: New upper bounds for maximum satisfi-
ability. J. Algorithms 36 (2000) 63–88
[39] Liffiton, M.H., Sakallah, K.A.: On finding all minimally unsatisfiable sub-
formulas. [47] 173–186
[40] Berre, D.L.: Sat4j, a satisfability library for java. http://sat4j.org/ (2008)
[41] Fu, Z., Malik, S.: On solving the partial max-sat problem. In Biere,
A., Gomes, C.P., eds.: SAT. Volume 4121 of Lecture Notes in Computer
Science., Springer (2006) 252–265
[42] Marques-Silva, J., Planes, J.: On using unsatisfiability for solving maxi-
mum satisfiability. CoRR abs/0712.1097 (2007)
[43] Eén, N., Sörensson, N.: Translating pseudo-boolean constraints into sat.
Journal on Satisfiability, Boolean Modeling and Computation 2 (2006) 1–26
[44] Marques-Silva, J., Manquinho, V.M.: Towards more effective
unsatisfiability-based maximum satisfiability algorithms. In Büning, H.K.,
Zhao, X., eds.: SAT. Volume 4996 of Lecture Notes in Computer Science.,
Springer (2008) 225–230
[45] Marques-Silva, J., Planes, J.: Algorithms for maximum satisfiability using
unsatisfiable cores. In: DATE, IEEE (2008) 408–413
[46] Third max-sat evaluation. http://www.maxsat.udl.cat/08/ (2008)
[47] Bacchus, F., Walsh, T., eds.: Theory and Applications of Satisfiability
Testing, 8th International Conference, SAT 2005, St. Andrews, UK, June
19-23, 2005, Proceedings. In Bacchus, F., Walsh, T., eds.: SAT. Volume
3569 of Lecture Notes in Computer Science., Springer (2005)
[48] Proceedings, The Twenty-First National Conference on Artificial Intelli-
gence and the Eighteenth Innovative Applications of Artificial Intelligence
Conference, July 16-20, 2006, Boston, Massachusetts, USA. In: AAAI,
AAAI Press (2006)
[49] Marques-Silva, J., Sakallah, K.A., eds.: Theory and Applications of Sat-
isfiability Testing - SAT 2007, 10th International Conference, Lisbon, Por-
tugal, May 28-31, 2007, Proceedings. In Marques-Silva, J., Sakallah, K.A.,
eds.: SAT. Volume 4501 of Lecture Notes in Computer Science., Springer
(2007)