
Archive for May, 2015

Lights off

Looking for another puzzle, I found the Lights Off problem at CodeChef. Wolfram MathWorld suggests a novel approach to solving it: at its core is a linear equation solver. The LinearEqSolver package on Hackage provides this functionality for Haskell.

The LinearEqSolver package is actually a wrapper around external solvers: Z3, CVC4, and MathSAT. I used CVC4, which I downloaded and installed.

Equipped with a fine solver like this, all we need to do is construct the matrix in the equation M x = b. Each row of M is a flattened binary matrix Aij that marks the positions of the board that are flipped when position (i,j) is pressed. This is the code that constructs Aij:

-- all positions of an n x n board, in row-major order
indices n = [(i,j) | i <- [0..n-1], j <- [0..n-1]]

-- flattened flip pattern for a press at (i,j): the position itself
-- and its orthogonal neighbours
makeA n i j = 
    map f (indices n)
  where
    f (i',j') = case (abs(i'-i),abs(j'-j)) of
                  (0,0) -> 1
                  (1,0) -> 1
                  (0,1) -> 1
                  _     -> 0

Using this code we can then construct M:

makeM n = map (-2:) (makeM' n)
  where makeM' n = map (uncurry (makeA n)) (indices n)

Note that this code adds an extra column filled with -2. The reason is that we are solving a system of linear equations modulo 2, while the solver works over the integers; the extra -2 column introduces a slack variable that absorbs multiples of 2. The transformation from a modulo problem to an ordinary linear equation problem is given here.
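The trick can be illustrated on a single toy equation (numbers of my own choosing): x1 + x2 ≡ 1 (mod 2) holds exactly when -2k + x1 + x2 = 1 for some integer k.

```haskell
-- Toy illustration of the slack-variable trick: enumerate small
-- candidates and keep those satisfying -2k + x1 + x2 = 1.
main :: IO ()
main = print [ (k, x1, x2)
             | k <- [0..2], x1 <- [0,1], x2 <- [0,1]
             , -2*k + x1 + x2 == 1 ]
-- only the assignments with x1 + x2 odd survive: [(0,0,1),(0,1,0)]
```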

Now we can wrap everything up into a solver function.

solve n b = do
    s <- solve' n b
    case s of
      Just xs -> return $ map (`mod` 2) (tail xs)
      _       -> return []
  where
    solve' n b = solveIntegerLinearEqs CVC4 (makeM n) b

Testing with the problem given at Wolfram MathWorld

test = solve 3 board
  where board = [0,1,0,1,1,0,0,1,1]

yields [1,1,1,0,0,0,0,0,1], which is also the solution presented at Wolfram MathWorld.
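Even without CVC4 installed, the answer can be checked directly: pressing each position in the solution adds its flip pattern to the board, and modulo 2 everything should cancel to zero. A standalone sketch (repeating makeA from above):

```haskell
indices n = [(i,j) | i <- [0..n-1], j <- [0..n-1]]

makeA n i j = map f (indices n)
  where
    f (i',j') = case (abs (i'-i), abs (j'-j)) of
                  (0,0) -> 1
                  (1,0) -> 1
                  (0,1) -> 1
                  _     -> 0

-- apply a press pattern to a board, modulo 2
press n board sol = map (`mod` 2) (foldl (zipWith (+)) board presses)
  where
    presses = [ makeA n i j | ((i,j), s) <- zip (indices n) sol, s == 1 ]

main :: IO ()
main = print (press 3 [0,1,0,1,1,0,0,1,1] [1,1,1,0,0,0,0,0,1])
-- all lights off: [0,0,0,0,0,0,0,0,0]
```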


Haskell code contemplation

I picked up a programming exercise at some random site. The problem involved finding the m smallest elements of a list. After a bit of fiddling I came up with this:

import Data.List.Ordered

fsm :: Ord a => Int -> [a] -> [a]
fsm m = foldr update []
  where 
    insert = insertBagBy (flip compare)
    update x minSet@(~(maxElem:minSet'))
      | length minSet < m = insert x minSet
      | x < maxElem       = insert x minSet'
      | otherwise         = minSet

The idea is to keep track of the smallest numbers (up to m of them) while folding the list. The data structure used is a simple (but ordered) list. The following operations are used:

  • insert
  • length
  • findMax
  • deleteMax

Considering efficiency, both findMax and deleteMax are O(1). Insert is O(n), which is a bit unfortunate. I was hoping for O(log n), but looking at the implementation of insertBagBy it is clearly not that efficient. However, insert is not a frequent operation for many inputs, and if m is small it could still be something we can live with. The length operation is O(n), which is worse, since it is triggered for every element in the list (for any input).

You might wonder why I did not simply use Data.Set. I started out with it until I realized it cannot handle duplicate elements. For example, finding the two minimum elements of [1,1,2,3] should give [1,1], but a set cannot represent that, so I turned to ordered lists instead.
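To see the duplicate handling in action, here is a base-only sketch of the same fold, with insertBy from Data.List standing in for insertBagBy so it runs without the data-ordlist package:

```haskell
import Data.List (insertBy)

-- Keep a descending ordered list of (at most) the m smallest elements
-- seen so far; insertBy plays the role of insertBagBy here.
fsm :: Ord a => Int -> [a] -> [a]
fsm m = foldr update []
  where
    insertDesc = insertBy (flip compare)
    update x minSet@(~(maxElem:minSet'))
      | length minSet < m = insertDesc x minSet
      | x < maxElem       = insertDesc x minSet'
      | otherwise         = minSet

main :: IO ()
main = do
  print (fsm 2 [1,1,2,3])   -- duplicates survive: [1,1]
  print (fsm 3 [9,1,8,2,7]) -- [7,2,1]
```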

I thought about the costly length operation for a while and came up with another version.

import Data.List.Ordered

fsm :: Ord a => Int -> [a] -> [a]
fsm m xs = foldr update (sort front) rest
  where 
    (front,rest) = splitAt m xs
    rcomp        = flip compare
    sort         = sortBy rcomp
    insert       = insertBagBy rcomp
    update x minSet@(maxElem:minSet')
      | x < maxElem = insert x minSet'
      | otherwise   = minSet

Instead of starting with an empty set, this version takes the first m elements and uses them as the initial set of minimum numbers. Thus the length operation is avoided entirely.

That was cool, but we can do even better using the Data.Heap module. It provides an efficient heap implementation with all the operations we need. The updated code looks like this:

import Prelude hiding (drop)
import Data.Maybe (fromJust)
import Data.Heap 

fsm :: Ord a => Int -> [a] -> [a]
fsm m = toList . foldr update (empty :: MaxHeap a)
  where 
    findMax   = fromJust . viewHead
    deleteMax = drop 1
    update x minSet
      | size minSet < m    = insert x minSet
      | x < findMax minSet = insert x (deleteMax minSet)
      | otherwise          = minSet

In summary we got:

  • insert, O(log n)
  • length/size, O(1)
  • findMax, O(1)
  • deleteMax, O(log n). This was deduced by looking at the source code. The documentation does not say.

So… indeed the operations are efficient, and the code is easy to read. This implementation has O(n log m) worst-case complexity, which compares favourably with the O(m ⋅ n) of the previous ones.

However, once we have an efficient heap implementation we might consider taking full advantage of it.

import Prelude hiding (take)
import Data.Heap

fsm :: Ord a => Int -> [a] -> [a]
fsm m = take m . foldr insert (empty :: MinHeap a)

In this way we turned it into a one-liner. We still rely only on efficient operations and end up with a similar (but slightly worse) time complexity, O(n log n). A downside is that we consume more memory: O(n) instead of the O(m) of the previous implementations.

It is a bit of a surprise to me that such a simple programming exercise offers so many options and raises so many questions. Compare it with the problem of finding the single smallest element of a list, which does not offer many options at all; a simple implementation is optimal.

fs :: Ord a => [a] -> a
fs = foldr1 (\x min -> if x < min then x else min)

I guess it is not that uncommon, though, that a small rephrasing of a problem changes the entire game. One example that comes to mind: finding the shortest path in a graph is cheap, while finding the longest path is expensive (with the known algorithms).


Word chains: a unified framework for neighbour (equivalence) testing

In the previous post the anagram test was separate from the others and not expressed in operations/transformations. I worked on this a bit, and this is where I ended up:

data Operation = Change | InsertXs | InsertYs | DeleteXs | DeleteYs | ReOrder

getf Change   = \(x:xs) (y:ys)   -> ([],xs,ys)
getf InsertXs = \xs     ys@(y:_) -> ([],y:xs,ys)
getf InsertYs = \xs@(x:_) ys     -> ([],xs,x:ys)
getf DeleteXs = \(x:xs) ys       -> ([],xs,ys) 
getf DeleteYs = \xs     (y:ys)   -> ([],xs,ys)
getf ReOrder  = \(x:xs)   ys     ->
                    case mkHead x ys [] of
                        Just ys' -> ([ReOrder],xs,ys')
                        _ -> ([],x:xs,ys)
            where
                mkHead _ []     _  = Nothing
                mkHead h (x:xs) ys 
                    | h == x    = Just (xs ++ ys) 
                    | otherwise = mkHead h xs (x:ys)

unifyable args =
    case args of
        (_, [], [])                -> return True
        (ops, x:xs, y:ys) | x == y -> unifyable (ops,xs,ys)
        (ops, xs, ys)              -> do { op <- ops; unifyable ((getf op) xs ys) }

neighbours xs ys = any id $ unifyable (operations, xs, ys)
    where 
        operations = 
            case length xs - length ys of
                0  -> [Change,ReOrder]
                1  -> [InsertYs,DeleteXs]
                -1 -> [InsertXs,DeleteYs]
                _  -> []
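A quick exercise of the framework (a standalone sketch repeating the definitions above; the example words are my own):

```haskell
data Operation = Change | InsertXs | InsertYs | DeleteXs | DeleteYs | ReOrder

getf Change   = \(x:xs) (y:ys)   -> ([],xs,ys)
getf InsertXs = \xs     ys@(y:_) -> ([],y:xs,ys)
getf InsertYs = \xs@(x:_) ys     -> ([],xs,x:ys)
getf DeleteXs = \(x:xs) ys       -> ([],xs,ys)
getf DeleteYs = \xs     (y:ys)   -> ([],xs,ys)
getf ReOrder  = \(x:xs)   ys     ->
                    case mkHead x ys [] of
                        Just ys' -> ([ReOrder],xs,ys')
                        _ -> ([],x:xs,ys)
            where
                mkHead _ []     _  = Nothing
                mkHead h (x:xs) ys
                    | h == x    = Just (xs ++ ys)
                    | otherwise = mkHead h xs (x:ys)

unifyable args =
    case args of
        (_, [], [])                -> return True
        (ops, x:xs, y:ys) | x == y -> unifyable (ops,xs,ys)
        (ops, xs, ys)              -> do { op <- ops; unifyable ((getf op) xs ys) }

neighbours xs ys = any id $ unifyable (operations, xs, ys)
    where
        operations =
            case length xs - length ys of
                0  -> [Change,ReOrder]
                1  -> [InsertYs,DeleteXs]
                -1 -> [InsertXs,DeleteYs]
                _  -> []

main :: IO ()
main = mapM_ print
  [ neighbours "cat" "cot"  -- single change: True
  , neighbours "cat" "cart" -- single insertion: True
  , neighbours "cat" "act"  -- reordering: True
  , neighbours "cat" "dog"  -- False
  ]
```

Note that ReOrder is the only operation that keeps itself in the operation list, which is what lets it move one letter at a time until an anagram unifies.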


Word chains: efficient implementation of the rule set

I continued rewriting the neighbour test from the previous post. I had two, somewhat contradictory, goals:

  • It should be efficient
  • It should match the definition more closely

This is my current version.

import Data.List (sort)

unifyable ops args =
   case args of
     (x:xs, y:ys) | x == y -> unifyable ops (xs,ys)
     ([], []) -> return True
     (xs, ys) -> do { op <- ops; unifyable [] (op xs ys) }

neighbours xs ys =
   case length xs - length ys of
     0  -> eval [change] || anagram xs ys
     1  -> eval [insert_ys,delete_xs]
     -1 -> eval [insert_xs,delete_ys]
     _  -> False
 where
    eval ops = any id $ unifyable ops (xs,ys)
    anagram xs ys = xs /= ys && sort xs == sort ys
    change (x:xs) (y:ys) = (xs,ys)
    insert_xs xs (y:ys) = (y:xs,ys)
    insert_ys (x:xs) ys = (xs,x:ys)
    delete_xs (x:xs) ys = (xs,ys)
    delete_ys xs (y:ys) = (xs,ys)

Except for the anagram test, the neighbour test now has a rather efficient unification framework that uses replace, insert, and delete operations to unify the two words. Not revolutionary, but it was fun at least.
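A standalone sketch exercising the rule set (repeating the definitions above; the example words are my own):

```haskell
import Data.List (sort)

unifyable ops args =
   case args of
     (x:xs, y:ys) | x == y -> unifyable ops (xs,ys)
     ([], []) -> return True
     (xs, ys) -> do { op <- ops; unifyable [] (op xs ys) }

neighbours xs ys =
   case length xs - length ys of
     0  -> eval [change] || anagram xs ys
     1  -> eval [insert_ys,delete_xs]
     -1 -> eval [insert_xs,delete_ys]
     _  -> False
 where
    eval ops = any id $ unifyable ops (xs,ys)
    anagram xs ys = xs /= ys && sort xs == sort ys
    change (x:xs) (y:ys) = (xs,ys)
    insert_xs xs (y:ys) = (y:xs,ys)
    insert_ys (x:xs) ys = (xs,x:ys)
    delete_xs (x:xs) ys = (xs,ys)
    delete_ys xs (y:ys) = (xs,ys)

main :: IO ()
main = mapM_ print
  [ neighbours "cat" "cot"  -- single change: True
  , neighbours "cat" "act"  -- anagram: True
  , neighbours "cart" "cat" -- single deletion: True
  , neighbours "cat" "dog"  -- False
  ]
```

Here anagrams are caught by the separate sort-based test, while the single-edit cases go through the unification fold.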
