Sunday, January 6, 2013

Strongly connected components in Clojure

Recently I had fun implementing the strongly connected components algorithm in Clojure. I thought some Clojurians might want to see it.

(defn dfs
  "Depth first search. Short form of the method passes through all the 
  nodes of the graph even if it's disconnected .
  (nodes-fn graph) expected to return list of all the nodes in the graph.
  (child-fn graph node) expected to return list of all the nodes linked
   to the given node.
  Returns hash-map where nodes are associated with a pair :idx, :leader.
  :idx stores finishing index of the node traversal (post-order counter)
  :leader first finishing index of the current DFS."
  ([graph nodes-fn child-fn]
     (second
      (reduce ;; Start DFS from each node of the graph
       (fn [[idx result passed :as args] next-node]
         (if (not (passed next-node)) ;; Don't do DFS if node is marked
           (dfs idx idx result passed graph next-node child-fn)
           args))
       [0 {} #{}] ;;Initial index, result, set of passed nodes
       (nodes-fn graph))))
  ([idx leader result passed graph node child-fn]
     (let [[idx result passed]
           (reduce (fn [[idx result passed :as args] child-node]
                     (if (not (passed child-node))
                       (dfs idx leader result passed 
                            graph child-node child-fn)
                       args))
                   [idx result (conj passed node)]
                   (child-fn graph node))]
       [(inc idx)
        (assoc result node {:idx idx :leader leader})
        passed])))

(defn pass-two 
  "Calls DFS making sure that traversal is done in the reverse :idx order."
  [graph result child-fn]
  (let [nodes-fn 
        (constantly (->> result 
                         ;;Sort by :idx in reverse order
                         (sort-by (comp :idx second)) reverse 
                         ;;Return only nodes
                         (map first)))]
    (dfs graph nodes-fn child-fn)))
  
(defn scc 
  "Finds strongly connected components of the given directed graph.
  Returns lists of nodes grouped into SCC.
  (nodes-fn graph) expected to return list of all the nodes in the graph.
  (incoming-fn graph node) expected to return all the nodes with
   transitions towards the given node.
  (outgoing-fn graph node) expected to return all the nodes with
   transitions from the given node."
  [graph nodes-fn incoming-fn outgoing-fn]
  (let [result (dfs graph nodes-fn incoming-fn)
        leaders-idx (pass-two graph result outgoing-fn)]
    (for [scc-group (vals (group-by (comp :leader second) leaders-idx))]
      (for [[node & _] scc-group] node))))

The implementation is quite generic and can be used with different graph representations. Here's an example using a list multimap. I use the same graph as in the course taught by Tim Roughgarden on Coursera:

(defn list-multimap 
  "Builds list multimap: {key1 [val1 val2 ...], key2 [val3 val4 ...]}.
   Each call adds value to the list associated with the key."
  [m [k v]]
  (if (m k)
    (update-in m [k] conj v)
    (assoc m k [v])))
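
Used as a reducing function, it folds a sequence of key/value pairs into a multimap:

(reduce list-multimap {} [[:a 1] [:a 2] [:b 3]])
;{:a [1 2], :b [3]}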

(defn reverse-graph 
  "Reverses list multimap based graph, see below."
  [graph]
  (reduce
   list-multimap
   {}
   (for [[key values] graph v values] [v key])))

(def test-graph 
  {6 [9], 2 [8], 4 [7], 3 [6], 8 [5 6], 1 [4], 9 [3 7], 5 [2], 7 [1]})

;Same graph but with letters in case you confuse indices and node names.
;(def test-graph
;  {'a ['g], 'b ['e], 'c ['i], 'd ['a], 
;   'e ['h], 'f ['c 'h], 'g ['d 'i], 'h ['b], 'i ['f]})

(def reverse-test-graph
  (reverse-graph test-graph))

And here's how to use the dfs and scc functions:

(dfs test-graph 
     ;;fn that returns set of nodes
     (constantly (into #{} (flatten (seq test-graph))))
     ;;(get graph node) returns list of related nodes.
     get) 

;{2 {:idx 8, :leader 3},
; 8 {:idx 7, :leader 3},
; 6 {:idx 6, :leader 3},
; 9 {:idx 5, :leader 3},
; 3 {:idx 4, :leader 3},
; 5 {:idx 3, :leader 3},
; 1 {:idx 2, :leader 0},
; 4 {:idx 1, :leader 0},
; 7 {:idx 0, :leader 0}}

(scc test-graph 
     ;;fn that returns set of nodes
     (constantly (into #{} (flatten (seq test-graph))))
     ;;works as incoming-fn using the cached reversed graph
     #(get reverse-test-graph %2) 
     ;;(get graph node) returns list of related nodes
     get)

;((8 5 2) (9 3 6) (1 4 7))

Full source code is here.

Tuesday, September 4, 2012

Key/Value in-memory storage for Java

Managing multiple properties in Java can be tiresome, especially if the properties are of different types. What is a property, you ask? Anything that doesn't come in thousands and doesn't get changed a thousand times a second. For example, a class mapped to a database table with more than ten pairs of get* and set* methods is a good candidate.

Problems start to appear when you need to:

  • Make this class thread-safe.
  • Or make a defensive copy of it, given the difficulties with the first point.

The classical way around concurrency problems is to use an immutable class. But managing immutable classes can be unwieldy, especially in the circumstances I've just described. You would need a huge constructor or a special builder class.

So this is the problem that I tried to solve and I hope you don't hate me after you read how I did it.

The idea was very simple: use a hash map to store the properties. But I didn't know how to make it type safe, as the values would be of different types. Then I came up with a special kind of key that dictates its value's type. Here's an excerpt from PropertyKey.java:

public class PropertyKey<T> {
    private final Class<T> clazz;
    private final String name;

    public PropertyKey(Class<T> valueType, String name) {
        this.clazz = valueType;
        this.name = name;
    }

    public boolean checkType(Object value) {
        if (null == value) {
            return true;
        }
        return this.clazz.isAssignableFrom(value.getClass());
    }
}

Constructing a new key looks like this:

PropertyKey<String> ENTITY_NAME = 
        new PropertyKey<String>(String.class, "ENTITY_NAME");

A bit verbose, and I'm sorry for that, but what I get in return is a key for a field ENTITY_NAME of type String. The class has a method checkType that returns true if its argument is an instance of T. All this is used by a special immutable-map-based holder class.

public class PropertyHolder {

    private final ImmutableMap<PropertyKey<?>, ?> storage;

    public <T> T get(PropertyKey<T> key) {
        return (T) storage.get(key);
    }

    /**
     * Adds key/value pair to the state and returns new 
     * {@link PropertyHolder} with this state.
     * 
     * @param key {@link PropertyKey} instance.
     * @param value Value of type specified in {@link PropertyKey}.
     * @return New {@link PropertyHolder} with updated state.
     */
    public <T> PropertyHolder put(PropertyKey<T> key, T value) {
        Preconditions.checkNotNull(key, "PropertyKey cannot be null");
        Preconditions.checkNotNull(value, "Value for key %s is null", 
                key);
        Preconditions.checkArgument(key.checkType(value), 
                "Property \"%s\" was given " 
                + "value of a wrong type \"%s\"", key, value);
        // Creates ImmutableMap.Builder with new key/value pair.
        return new PropertyHolder(filterOutKey(key)
                .put(key, value).build());
    }
}
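
The put method above relies on a private helper filterOutKey that this excerpt omits. A minimal sketch of it, assuming it simply copies the current state into a fresh builder while skipping the key being replaced, could look like this:

private ImmutableMap.Builder<PropertyKey<?>, Object> filterOutKey(PropertyKey<?> key) {
    // Copy every existing entry except the given key into a new builder.
    ImmutableMap.Builder<PropertyKey<?>, Object> builder = ImmutableMap.builder();
    for (Map.Entry<PropertyKey<?>, ?> entry : storage.entrySet()) {
        if (!entry.getKey().equals(key)) {
            builder.put(entry.getKey(), entry.getValue());
        }
    }
    return builder;
}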

You can download the full source code and complete unit tests from here. In the full implementation I require values to be Serializable only because my project needs it; feel free to get rid of that.

PropertyHolder is an immutable class with an interface very similar to Map's. Its methods put and remove return a new copy of PropertyHolder with the modifications applied.

The way I do it is not optimal, as every modification creates a full copy of the ImmutableMap; one way to solve that would be to use persistent collections. But if you don't put more than 20 objects in it and don't update it too often, the current implementation should do just fine.

Declaring keys is a little ugly, but once that's done they are very easy to use:

// Adding new k/v pair
PropertyHolder holder = new PropertyHolder();
holder = holder.put(ENTITY_NAME, "Person");

// Reading value for a key in a typesafe way
String name = holder.get(ENTITY_NAME);

That's it. Please let me know if you find this solution useful or if there's a way to improve it.

Feel free to use it as is or modify it to your needs. I'm publishing the full sources and unit tests under the Eclipse Public License.

Monday, August 20, 2012

Use case for mutable objects in Clojure

I'm currently implementing the Aho-Corasick dictionary indexing algorithm. For that I have to write an augmented trie, where each node can point to another branch of the trie. Such links are called failure links. For example, say a trie contains two strings: "abcd" and "fgabch". The letters of the second string (starting from "a") will have failure links to the corresponding letters of the first word.

Here's my ASCII "art":

a-b-c-d
^___
    |
f-g-a-b-c-h

a-b-c-d
  ^___
      |
f-g-a-b-c-h

a-b-c-d
    ^___
        |
f-g-a-b-c-h

This is needed to guarantee matching in time proportional to the length of the text. If the text is "fgabcd", the matching algorithm will match all the letters in "fgabc", fail to match the last "d", then use the failure link fgabc -> abc and continue matching from there. As a result it will report a matched word with start and end indices [2,5].

Without failure links the algorithm would have to restart matching from "g", finding nothing, then again from "a", where it would succeed. This property of never going back is crucial in the complexity analysis and gives the much-desired O(n) execution time.

Now, I have tried hard to implement this trie using immutable data structures. Following the description from the book, I need to first construct a simple trie with all the words in it and then traverse it starting from the root in breadth-first order to compute the failure links.

The problem emerged when I realized that a failure link may point to the node's own parent: for example, when indexing the string "aaa", the second letter "a" gets a failure link to the first letter "a", its parent. As it is impossible to create cycles using constructor-based initialization, immutable collections initialized only in constructors wouldn't work.
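
To illustrate with a minimal sketch (not my actual implementation): if each node keeps its failure link in an atom, the link can be set after the node is constructed, even when it points back to a parent.

(defn node []
  {:children (atom {}) :failure (atom nil)})

(def first-a (node))  ;; first "a" of "aaa"
(def second-a (node)) ;; second "a", child of the first
(swap! (:children first-a) assoc \a second-a)
;; The failure link of the second "a" points back to its parent:
(reset! (:failure second-a) first-a)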

I know about the delay primitive and how it can be used to create cycles; the Korma library makes very nice use of it. But in my case it is prohibitively expensive (citation needed): this algorithm may be used to index thousands of strings, and it is not practical to create tens of thousands of delay objects.

To be clear, I'm not trying to argue against immutable data structures. It's just that sometimes we have to venture into the realm of mutability to achieve efficient code. And I think transients in Clojure are proof of that.

Thursday, March 11, 2010

Lazy graph traversal in Clojure

Clojure is a new Lisp-like programming language that runs on the JVM. It has tons of interesting features that are worth exploring. To name a few: a concurrency model based on immutable data and software transactional memory, lazy sequences, etc.

I am currently re-reading a book on algorithms, so I decided it might be a good idea to implement some of these algorithms in Clojure. I have implemented breadth-first and depth-first graph traversals using the lazy-seq paradigm.

Some aspects of the implementation may appear slightly inefficient; for example, I represent the graph as a sequence of pairs, and each time I need the neighbors of a node I traverse the entire sequence. But I did it that way so that I could concentrate on the algorithm in question.

For the experienced reader: if there is anything in my code that you consider bad form, please don't hesitate to comment.

Without further ado, here it is:

(ns algo.traversal)

"
 1--2-6--7
 |   \\ /
 3--4-5
"

(def graph [[1 2] [1 3] [2 6] [6 7] [3 4] [4 5] [2 5] [5 7]])

(defn transitions
  "Finds all the transitions going to and from the node.
  Transitions to the node are reversed."
  [graph node]
  (map #(if (= node (first %))
          %
          (reverse %))
       (filter #(or (= node (first %)) (= node (second %))) graph)))
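
;; For example, transitions of node 2 ([1 2] comes back reversed):
;; (transitions graph 2) => ((2 1) [2 6] [2 5])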

(defn breadth-first
  "Returns a lazy sequence of transitions in breadth-first order
  starting from the given node."
  [graph start]
  (let [walk 
        (fn walk [past-nodes trans]
          (let [next-trans (drop-while #(past-nodes (second %)) trans)]
            (when-let [next-node (second (first next-trans))]
              (lazy-seq
               (cons (first next-trans)
                     (walk (conj past-nodes next-node)
                           (concat next-trans (transitions graph next-node))))))))]
    (walk #{start} (transitions graph start))))

(defn depth-first
  "Returns a lazy sequence of transitions in depth-first order
  starting from the given node."
  [graph start]
  (let [walk 
        (fn walk [past-nodes trans]
          (let [next-trans (drop-while #(past-nodes (second %)) trans)]
            (when-let [next-node (second (first next-trans))]
              (lazy-seq
               (cons (first next-trans)
                     (walk (conj past-nodes next-node)
                           (concat (transitions graph next-node) next-trans)))))))]
    (walk #{start} (transitions graph start))))

As I mentioned before, I define the graph as a sequence of pairs; all transitions are navigable in both directions.

The breadth-first and depth-first functions are almost the same except for one tiny part; I will talk about it later.

Now, let's take a closer look at the structure of these functions.

Both of them do two things: define the inner function walk and call it. Clearly, all the juice is in this function. Here's the one from breadth-first:

(fn walk [past-nodes trans]
  (let [next-trans (drop-while #(past-nodes (second %)) trans)]
    (when-let [next-node (second (first next-trans))]
      (lazy-seq
       (cons (first next-trans)
             (walk (conj past-nodes next-node)
                   (concat next-trans (transitions graph next-node))))))))

walk is used to recursively traverse the graph. It closes over the outer function's arguments [graph start], and it has two arguments of its own:

  • past-nodes - hash-set of nodes that have already been traversed and
  • trans - transitions yet to walk.

The next line defines a local binding next-trans, which is a sequence of transitions. It is guaranteed to start with a transition to a new, not-yet-traversed node.

next-trans (drop-while #(past-nodes (second %)) trans)
After that, next-node is bound to the id of the next node. If next-trans is empty, the whole block returns nil.
when-let [next-node (second (first next-trans))]
Then comes the call to lazy-seq; this macro wraps its body in a lazy sequence, which allows Clojure to stop the computation until further data is requested. I pass it a sequence whose elements are the next transition and a recursive call to walk.

The recursive call represents the sole difference between the two algorithms. In the breadth-first case I build the sequence of transitions to traverse by appending the transitions from the current node to the end of the sequence,

(walk (conj past-nodes next-node)
      (concat next-trans (transitions graph next-node)))
while in the depth-first case I do the opposite, appending them to the head of the sequence:
(walk (conj past-nodes next-node)
      (concat (transitions graph next-node) next-trans))
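
Finally, here is what the two traversals should yield for the sample graph, starting from node 1 (breadth-first fans out level by level, depth-first dives down the 2-6-7 branch first):

(breadth-first graph 1)
;=> ([1 2] [1 3] [2 6] [2 5] [3 4] [6 7])

(depth-first graph 1)
;=> ([1 2] [2 6] [6 7] (7 5) (5 4) (4 3))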

Tuesday, December 29, 2009

Rounding doubles in Java

Strangely enough, there is no standard way in Java to round double values to a certain power of 10 or 2. There are ways to do it, but they seem too bulky and, maybe, not fast enough. For example, the one using BigDecimal: creating a new object only to round a value seems like overkill.

After spending a little time studying the methods offered by the standard java.lang.Math class, I stumbled upon two methods that in combination allow me to round values to a power of two.

double Math.scalb(double d, int scaleFactor) - multiplies the given double value by two raised to the given power,
double Math.rint(double a) - rounds the value to the nearest mathematical integer (halfway cases go to the even integer).

So here's what needs to be done:
  1. Divide the given double value by the given power of two.
  2. Round the result to the nearest whole value.
  3. Multiply the result by the given power of two.
But there is a catch that you should know about: the resulting value may still have something below the given precision point, and that's OK. In fact, what this method does is set the granularity to the given power of two.
Here's the method:
public static double round(double value, int power) {
  // Divide value by 2^power
  double scaled = Math.scalb(value, -power);
  // Round to the nearest integer, then multiply back by 2^power
  return Math.scalb(Math.rint(scaled), power);
}
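For example, with power -3 the granularity is 2^-3 = 0.125:
double r = round(3.14159, -3);
// scalb(3.14159, 3) = 25.13272, rint gives 25.0, scalb(25.0, -3) = 3.125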
This approach works great when you compare doubles with a tolerance and need equal hash codes for equal values.

Don't forget that rounding is done up to a certain power of two:
Power of two | Values the fractional part may assume
-1           | 0.5
-2           | 0.25, 0.5, 0.75
-3           | 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875

Sunday, June 28, 2009

EXIF with Python

Recently I wrote a small script in Python that restores a photo file's creation and modification dates to their original values, taken from EXIF.

First of all you need to have Python (make sure to get version 2.6.x, 3.x won't work). You will also have to install some libraries.

Getting the libraries

Win32 Extensions
Linux users can skip this step. The "problem" with Windows is that files on FAT32 or NTFS have both a creation and a modification date, while Python out of the box supports only the latter. To fix that you'll need to get the Win32 Extensions library and install it. If you skip this step the script will still work, but it will only be able to fix the modification date; the creation date will stay untouched.

pyexif
To parse EXIF headers you will need the pyexif library. Just unpack it to the folder where you have Python installed.

exif
exif is a Linux command-line utility. In some respects it is better than pyexif, as I had some pictures where pyexif failed to retrieve the data. Linux users should check that exif is installed.
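
Once the libraries are in place, the script below takes one or more file name masks on the command line, for example (the mask here is just an illustration; anything glob understands will work):

python exif_date.py "*.jpg"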

The script
#!/usr/bin/env python

"""A simple utility to restore file creation and modification 
dates back to their original values from EXIF.

This script requires exif module to be installed or the exif  
command line utility to be in the path.

To function correctly under windows this script needs win32file and
win32con modules. Otherwise it will not be able to restore the creation 
date."""

import os, sys, time, re, glob
from datetime import datetime, timedelta

try:
  import win32file, win32con
  __use_win_32 = True
except ImportError:
  __use_win_32 = False

__path_to_exif = 'exif'

TEN_MINUTES = timedelta(minutes=10)

__description = """Restores file's creation and modification dates back to the original 
value from EXIF.
usage: exif_date.py [File name mask]"""    

def getExifCreationDate(path):
  """Gets the earliest date from the file's EXIF header, returns time tuple"""
  timeStamp = None
  try:
    import exif
    pf = exif.parse(path)
    originalTime = pf.get('DateTimeOriginal')
    if (originalTime):
      timeStamp = datetime.strptime(originalTime, '%Y:%m:%d %H:%M:%S')
  except Exception:
    pass
  
  #sometimes the exif lib fails to retrieve data
  if (not timeStamp):
    response = os.popen(__path_to_exif + ' -x "%s"' % path, 'r')
    lines = response.read()
    # extract all EXIF-formatted timestamps from the XML output
    matches = re.findall(r'\d{4}:\d{2}:\d{2} \d{2}:\d{2}:\d{2}', lines)
    if (len(matches)):
      timeStamp = min([datetime.strptime(x, '%Y:%m:%d %H:%M:%S') for x in matches])
  return timeStamp

def getFileDates(path):
  """Returns a dictionary of file creation (ctime), modification (mtime), exif (exif) dates"""
  dates = {}
  dates['exif'] = getExifCreationDate(path)
  dates['mtime'] = datetime.utcfromtimestamp(os.path.getmtime(path))
  dates['ctime'] = datetime.utcfromtimestamp(os.path.getctime(path))
  return dates

def setFileDates(fileName, dates):
  """Sets file modification and creation dates to the specified value"""
  if __use_win_32:
    filehandle = win32file.CreateFile(fileName, win32file.GENERIC_WRITE, 0, None, win32con.OPEN_EXISTING, 0, None)
    win32file.SetFileTime(filehandle, *(dates['exif'],)*3)
    filehandle.close()
  else:
    os.utime(fileName, (time.mktime(dates['exif'].utctimetuple()),)*2)

def fixFileDate(fileName):
  """Reads file's EXIF header, gets the earliest date and sets it to the file"""
  dates = getFileDates(fileName)
  if (dates['exif']):
    cmp_time = lambda x, y: x - y > TEN_MINUTES
    diff = [cmp_time(dates[x], dates['exif']) for x in ('mtime', 'ctime')]
    if(sum(diff)):
      setFileDates(fileName, dates)
    return dates, diff
  else:
    return dates, None

def usage():
  print __description

def main(args):
  if (not len(args)):
    usage()
    return -1
  processedFiles = []
  for fileNameMask in args:
    if "*" in fileNameMask or "?" in fileNameMask:
      print "Looking for files with mask " + fileNameMask
    for fileName in filter(lambda x: x not in processedFiles, glob.glob(fileNameMask)):
      processedFiles.append(fileName)
      try:
        dates, diff = fixFileDate(fileName)
      except Exception, e:
        print e
        diff = None
      print fileName + ' - ',
      if (not diff):
        print 'SKIP, NO EXIF'
      else:
        if (sum(diff) != 0):
          print 'SET TO "%s" (updated M:%d, C:%d)' % (dates['exif'].strftime('%Y:%m:%d %H:%M:%S'), diff[0], diff[1])
        else:
          print 'OK'
  return 0

if __name__ == '__main__':
  sys.exit(main(sys.argv[1:]))
Click here or here to download the script.

Edit: removed OptionParser from imports.
Edit 2: improved handling of daylight savings time.

Wednesday, February 18, 2009

XML persistence with Hibernate and XStream

Not long ago, in a project I was working on, we were writing a lot of blobs to a database. The information persisted was not of high interest, so we didn't even bother breaking the blobs into separate tables. Of course, on the very rare occasions when we needed to see the data, we couldn't.

The problem seemed hard to solve until I stumbled across a very neat XML persistence library, XStream. It allows you to persist and restore objects of any type to and from XML without any additional configuration. To learn more about the library I recommend skimming through their very short but expressive tutorial.

We used Hibernate for persistence, so I needed to teach it how to use XStream. Thankfully, Hibernate is an easily extensible library. I wrote a persister capable of transparently persisting any object as an array of chars in the form of XML.

My solution has a major drawback though, which I haven't fixed: the maximum length of a character array is restricted in the DBMS we use (in Oracle it is 4000 chars). So if you need to persist really long sets of data or just big objects, you might have to rewrite the persister to use CLOB or whatever alternative your database offers.
package cern.oasis.util.persistence;

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

import org.hibernate.Hibernate;
import org.hibernate.HibernateException;
import org.hibernate.usertype.UserType;

import com.thoughtworks.xstream.XStream;

/**
 * Persists any serializable java object as XML string.
 * 
 * @author Ivan Koblik
 */
public class XMLString implements UserType {

  private static final int[] TYPES = { Types.VARCHAR };

  private final static XStream stream = new XStream();
  static {
    //Here you can define aliases of some classes if you want to
    //stream.alias("Impedance", Impedance.class);
  }

  @Override
  public Object assemble(Serializable cached, Object owner) 
      throws HibernateException {
    return null == cached ? null : stream.fromXML((String) cached);
  }

  @Override
  public Serializable disassemble(Object value) throws HibernateException {
    return null == value ? null : stream.toXML(value);
  }

  @Override
  public Object deepCopy(Object value) throws HibernateException {
    return null == value ? null : stream.fromXML(stream.toXML(value));
  }

  @Override
  public boolean equals(Object x, Object y) throws HibernateException {
    if (x == y) {
      return true;
    } else if (x == null || y == null) {
      return false;
    } else {
      return x.equals(y);
    }
  }

  @Override
  public int hashCode(Object x) throws HibernateException {
    return null == x ? 0 : x.hashCode();
  }

  @Override
  public boolean isMutable() {
    return true;
  }

  @Override
  public Object nullSafeGet(ResultSet rs, String[] names, Object owner) 
      throws HibernateException, SQLException {
    String persistedXml = (String) Hibernate.STRING.nullSafeGet(rs, names[0]);
    return null == persistedXml ? null : stream.fromXML(persistedXml);
  }

  @Override
  public void nullSafeSet(PreparedStatement st, Object value, int index) 
      throws HibernateException, SQLException {
    String xmlToPersist = null == value ? null : stream.toXML(value);
    if (null != xmlToPersist && xmlToPersist.length() > 4000) {
      throw new RuntimeException(
          "Can not persist strings longer then 4000 characters:\n" + xmlToPersist);
    }
    Hibernate.STRING.nullSafeSet(st, xmlToPersist, index);
  }

  @Override
  public Object replace(Object original, Object target, Object owner) 
      throws HibernateException {
    return this.deepCopy(original);
  }

  @Override
  public Class returnedClass() {
    return Serializable.class;
  }

  @Override
  public int[] sqlTypes() {
    return TYPES;
  }

}
That's it; the only thing left to do is tell Hibernate to use this persister. Here's an example with annotations:
  
@Entity
public class EntityClass {
  ...
  @AccessType("field")
  @Type(type = "cern.oasis.util.persistence.XMLString")
  public Map<String, Boolean> getValueThatIsPersistentByXStream() {
      return this.map;
  }
}
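
If you use hbm.xml mappings instead of annotations, the equivalent should look something like this (same property as in the example above):

<property name="valueThatIsPersistentByXStream"
          type="cern.oasis.util.persistence.XMLString"/>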
Hope you find it useful. Comments are more than welcome!