
Logical Shifts

Efficiently arithmetizing right shifts

Recall that our goal is to come up with efficient evaluation procedures for the multilinear extensions $\widetilde{\textsf{shift-ind}}_\text{op}(I, J, S)$, for each $\text{op} \in \{\mathsf{sll}, \mathsf{srl}, \mathsf{sra}\}$. We defined those polynomials during our discussion of shifted witness polynomials.

Our task on this page is to handle $\text{op} \coloneqq \mathsf{srl}$. We claim that if we can handle $\widetilde{\textsf{shift-ind}}_\mathsf{srl}$, we will get $\widetilde{\textsf{shift-ind}}_\mathsf{sll}$ thrown in for free. Indeed, for each $(i, j, s) \in \mathcal{B}_6 \times \mathcal{B}_6 \times \mathcal{B}_6$,

\begin{equation*}\textsf{shift-ind}_\textsf{sll}(i, j, s) = \textsf{shift-ind}_\textsf{srl}(j, i, s);\end{equation*}

that is, on the cube, these functions just differ by a transposition of their first two arguments. Moreover, this fact carries over to their multilinear extensions.

The Auxiliary Functions

For each $k \in \{0, \ldots, 6\}$, we will define two auxiliary functions, both defined on $\mathcal{B}_k \times \mathcal{B}_k \times \mathcal{B}_k$:

\begin{equation*}\textsf{s}_k(i, j, s) \coloneqq \begin{cases} 1 & \text{ if $\{j\} \stackrel{?}= \{i\} + \{s\}$} \\ 0 & \text{ otherwise} \end{cases};\end{equation*}

\begin{equation*}\textsf{s}'_k(i, j, s) \coloneqq \begin{cases} 1 & \text{ if $\{j\} \stackrel{?}= \{i\} + \{s\} - 2^k$} \\ 0 & \text{ otherwise} \end{cases}.\end{equation*}

In each case, we write $\{u\} \coloneqq \sum_{i = 0}^{k - 1} 2^i \cdot u_i$ for the little-endian bitstring-to-integer function. So $\mathsf{s}_k(i, j, s)$ tracks whether $i$, $j$ and $s$, interpreted as integers, are such that $\{j\} \stackrel{?}= \{i\} + \{s\}$; $\mathsf{s}'_k$ does the same thing, but modulo an overflow into the $k$th bit position.
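As a concrete reference point, here is a direct (non-recursive) transcription of these two definitions; this is just an illustrative Python sketch (the helper names are ours, not part of the construction), with bitstrings represented as little-endian tuples of bits, matching the $\{u\}$ notation.

```python
def to_int(u):
    """Little-endian bits-to-integer map: {u} = sum_i 2^i * u_i."""
    return sum(bit << idx for idx, bit in enumerate(u))

def s(k, i, j, t):
    """s_k(i, j, s): 1 iff {j} = {i} + {s} over the integers."""
    assert len(i) == len(j) == len(t) == k
    return 1 if to_int(j) == to_int(i) + to_int(t) else 0

def s_prime(k, i, j, t):
    """s'_k(i, j, s): 1 iff {j} = {i} + {s} - 2^k, i.e. the sum overflows into bit k."""
    assert len(i) == len(j) == len(t) == k
    return 1 if to_int(j) == to_int(i) + to_int(t) - (1 << k) else 0
```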

Note first that $\textsf{shift-ind}_\mathsf{srl} = \mathsf{s}_6$, so $\widetilde{\textsf{shift-ind}}_\mathsf{srl} = \widetilde{\mathsf{s}}_6$. Thus if we produce an efficient circuit for $\widetilde{\mathsf{s}}_6$, then that will be enough to achieve $\widetilde{\textsf{shift-ind}}_\mathsf{srl}$ (and hence also $\widetilde{\textsf{shift-ind}}_\mathsf{sll}$, as we argued above).

Recursive Construction

We first note that $\mathsf{s}_k$ and $\mathsf{s}'_k$ (without the tildes) have a recursive substructure.

  • Base Case.
    • $\mathsf{s}_0 = 1$.
    • $\mathsf{s}'_0 = 0$.
  • Inductive Case. For each $k \in \{1, \ldots, 6\}$ and each input triple $(i, j, s) \in \mathcal{B}_k \times \mathcal{B}_k \times \mathcal{B}_k$:

    • \begin{equation*}\textsf{s}_k(i, j, s) = \begin{cases} \mathsf{s}_{k - 1}(i, j, s) & \text{ if $i_{k - 1} + s_{k - 1} \stackrel{?}= j_{k - 1}$} \\ \mathsf{s}'_{k - 1}(i, j, s) & \text{ if $i_{k - 1} + s_{k - 1} + 1 \stackrel{?}= j_{k - 1}$} \\ 0 & \text{ otherwise} \end{cases}.\end{equation*}
    • \begin{equation*}\textsf{s}'_k(i, j, s) = \begin{cases} \mathsf{s}_{k - 1}(i, j, s) & \text{ if $i_{k - 1} + s_{k - 1} \stackrel{?}= j_{k - 1} + 2$} \\ \mathsf{s}'_{k - 1}(i, j, s) & \text{ if $i_{k - 1} + s_{k - 1} + 1 \stackrel{?}= j_{k - 1} + 2$} \\ 0 & \text{ otherwise} \end{cases}.\end{equation*}

This is a claim, not a definition (we already defined these functions above). We understand all sums in the cases above as taking place over the integers, and $\mathsf{s}_{k - 1}$ and $\mathsf{s}'_{k - 1}$ as applied to the low-order $k - 1$ bits of $i$, $j$ and $s$. The claim can be checked by inspection: which branch applies records whether the low-order bits produce a carry into position $k - 1$. It comes from [DP25, § 4.3].
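The recursion can also be confirmed exhaustively against the direct definitions; the sketch below (reusing the hypothetical `s`, `s_prime` and `to_int` helpers from the previous snippet) does exactly that for every $k \le 6$.

```python
from itertools import product

def s_rec(k, i, j, t):
    """Recursive evaluation of s_k via the inductive case above."""
    if k == 0:
        return 1
    lower = (i[:k - 1], j[:k - 1], t[:k - 1])
    if i[k - 1] + t[k - 1] == j[k - 1]:          # no carry out of the low bits
        return s_rec(k - 1, *lower)
    if i[k - 1] + t[k - 1] + 1 == j[k - 1]:      # carry out of the low bits
        return s_prime_rec(k - 1, *lower)
    return 0

def s_prime_rec(k, i, j, t):
    """Recursive evaluation of s'_k via the inductive case above."""
    if k == 0:
        return 0
    lower = (i[:k - 1], j[:k - 1], t[:k - 1])
    if i[k - 1] + t[k - 1] == j[k - 1] + 2:      # no carry out of the low bits
        return s_rec(k - 1, *lower)
    if i[k - 1] + t[k - 1] + 1 == j[k - 1] + 2:  # carry out of the low bits
        return s_prime_rec(k - 1, *lower)
    return 0

# Exhaustive check of the claim for k = 0, ..., 6.
for k in range(7):
    for i, j, t in product(product((0, 1), repeat=k), repeat=3):
        assert s_rec(k, i, j, t) == s(k, i, j, t)
        assert s_prime_rec(k, i, j, t) == s_prime(k, i, j, t)
```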

The Case Breakdowns

To arithmetize these, we need to break down the possible cases. For example, for which bit-triples $(i_{k - 1}, j_{k - 1}, s_{k - 1}) \in \mathcal{B}_1 \times \mathcal{B}_1 \times \mathcal{B}_1$ does it hold that $i_{k - 1} + s_{k - 1} \stackrel{?}= j_{k - 1}$, as integers?

We answer this question now. We abbreviate $(i, j, s) \coloneqq (i_{k - 1}, j_{k - 1}, s_{k - 1})$. For each $(i, j, s) \in \mathcal{B}_1 \times \mathcal{B}_1 \times \mathcal{B}_1$:

  • $i + s = j$ holds (over $\mathbb{Z}$) if and only if $(i, j, s) \in \{(0, 0, 0), (0, 1, 1), (1, 1, 0)\}$.
  • $i + s + 1 = j$ holds (over $\mathbb{Z}$) if and only if $(i, j, s) = (0, 1, 0)$.
  • $i + s = j + 2$ holds (over $\mathbb{Z}$) if and only if $(i, j, s) = (1, 0, 1)$.
  • $i + s + 1 = j + 2$ holds (over $\mathbb{Z}$) if and only if $(i, j, s) \in \{(0, 0, 1), (1, 0, 0), (1, 1, 1)\}$.

This gives a pathway towards arithmetization. The next step is to write down multilinear polynomials in $(I, J, S)$ whose restrictions to $\mathcal{B}_3$ respectively "indicate" each of the four subsets above. We know that this is possible, by Lagrange interpolation. As it happens, by being cleverer, we can find expressions for these that use very few multiplications apiece (in fact just 2 in each case, which seems to be optimal).

  • $i + s = j$ holds (over $\mathbb{Z}$) if and only if $1 + j + s + i \cdot (1 + s \cdot (1 + j)) = 1$ (over $\mathbb{F}_2$).
  • $i + s + 1 = j$ holds (over $\mathbb{Z}$) if and only if $(1 - i) \cdot j \cdot (1 - s) = 1$ (over $\mathbb{F}_2$).
  • $i + s = j + 2$ holds (over $\mathbb{Z}$) if and only if $i \cdot (1 - j) \cdot s = 1$ (over $\mathbb{F}_2$).
  • $i + s + 1 = j + 2$ holds (over $\mathbb{Z}$) if and only if $i + s + j \cdot (i + s \cdot (1 + i)) = 1$ (over $\mathbb{F}_2$).

As a sanity check, note that since the indicators in the middle two bullets need to equal 1 at just one point apiece, we can get away in those two cases with a single multilinear Lagrange basis polynomial. The other two cases are more complicated, but still get by with just two multiplications each.
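These four identities can also be checked mechanically; the following sketch enumerates all eight bit-triples and compares each $\mathbb{F}_2$ expression against its integer-side condition (again just an illustrative check, not part of the construction).

```python
from itertools import product

for i, j, s in product((0, 1), repeat=3):
    # Integer-side conditions on the bit-triple (i, j, s).
    conds = [i + s == j, i + s + 1 == j, i + s == j + 2, i + s + 1 == j + 2]
    # F_2-side indicator expressions, reduced mod 2.
    exprs = [
        (1 + j + s + i * (1 + s * (1 + j))) % 2,
        ((1 - i) * j * (1 - s)) % 2,
        (i * (1 - j) * s) % 2,
        (i + s + j * (i + s * (1 + i))) % 2,
    ]
    assert [int(c) for c in conds] == exprs
```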

The Arithmetizations

We're now ready to arithmetize $\widetilde{\mathsf{s}}_k$ and $\widetilde{\mathsf{s}}'_k$. Indeed, our arithmetizations come from the above:

  • Base Case.
    • $\widetilde{\mathsf{s}}_0 = 1$.
    • $\widetilde{\mathsf{s}}'_0 = 0$.
  • Inductive Case. For each $k \in \{1, \ldots, 6\}$, as an identity of polynomials in $(I, J, S)$:

    • \begin{align*}\widetilde{\textsf{s}}_k(I, J, S) &= \left[ 1 + J_{k - 1} + S_{k - 1} + I_{k - 1} \cdot (1 + S_{k - 1} \cdot (1 + J_{k - 1})) \right] \cdot \widetilde{\mathsf{s}}_{k - 1}(I, J, S) \\ &+ \left[ (1 - I_{k - 1}) \cdot J_{k - 1} \cdot (1 - S_{k - 1}) \right] \cdot \widetilde{\mathsf{s}}'_{k - 1}(I, J, S). \end{align*}
    • \begin{align*}\widetilde{\textsf{s}}'_k(I, J, S) &= \left[ I_{k - 1} \cdot (1 - J_{k - 1}) \cdot S_{k - 1} \right] \cdot \widetilde{\mathsf{s}}_{k - 1}(I, J, S) \\ &+ \left[ I_{k - 1} + S_{k - 1} + J_{k - 1} \cdot (I_{k - 1} + S_{k - 1} \cdot (1 + I_{k - 1})) \right] \cdot \widetilde{\mathsf{s}}'_{k - 1}(I, J, S). \end{align*}

In each case, the right-hand side is multilinear and, by the case breakdown above, agrees with $\mathsf{s}_k$ (respectively $\mathsf{s}'_k$) on the cube; by uniqueness of multilinear extensions, the identities hold. This shows that for any $(r_i, r_j, r_s)$, we can compute $\widetilde{\mathsf{s}}_6(r_i, r_j, r_s)$ recursively. We initialize two registers, valued $1$ and $0$, respectively. Then, we make 6 passes over those two registers, indexed $k \in \{1, \ldots, 6\}$. In each pass, we can "advance" the state of the two registers by doing no more than 12 multiplications. The total number of multiplications is $12 \cdot 6 = 72$.
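To make the operation count concrete, here is a sketch of that two-register evaluation in Python. Since the bracketed indicator factors use $+$ and $-$ interchangeably, they are tailored to a field of characteristic 2; the toy field below is $\mathbb{F}_{2^8}$ under the AES reduction polynomial, chosen purely for illustration (the construction's actual field, and all of the names here, are assumptions of this sketch).

```python
import random

# Toy binary field GF(2^8): addition is XOR, multiplication reduces modulo
# x^8 + x^4 + x^3 + x + 1 (0x11B). Any characteristic-2 field would do.
def gf_mul(a, b):
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return acc

def eval_s6_tilde(ri, rj, rs):
    """Evaluate s~_6 at (ri, rj, rs) -- three little-endian lists of 6 field
    elements -- via the two-register recursion, counting multiplications."""
    reg, reg_prime = 1, 0          # s~_0 = 1 and s~'_0 = 0
    muls = 0
    for k in range(6):             # pass k + 1 consumes the variables indexed k
        i, j, s = ri[k], rj[k], rs[k]
        # The four indicator factors (in characteristic 2, "1 - x" is "1 ^ x").
        a = 1 ^ j ^ s ^ gf_mul(i, 1 ^ gf_mul(s, 1 ^ j))   # "i + s = j"
        b = gf_mul(gf_mul(1 ^ i, j), 1 ^ s)               # "i + s + 1 = j"
        c = gf_mul(gf_mul(i, 1 ^ j), s)                   # "i + s = j + 2"
        d = i ^ s ^ gf_mul(j, i ^ gf_mul(s, 1 ^ i))       # "i + s + 1 = j + 2"
        reg, reg_prime = (gf_mul(a, reg) ^ gf_mul(b, reg_prime),
                          gf_mul(c, reg) ^ gf_mul(d, reg_prime))
        muls += 12                 # 8 for the factors, 4 to update the registers
    return reg, muls

point = [[random.randrange(256) for _ in range(6)] for _ in range(3)]
value, count = eval_s6_tilde(*point)
print(value, count)                # count == 72
```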