
Tuesday, September 4, 2012

Key/Value in-memory storage for Java

Managing multiple properties in Java may be tiresome, especially if these properties are of different types. What is a property, you ask? Anything that doesn't come in thousands and doesn't get changed a thousand times a second. For example, a class mapped to a database table, with its ten-plus pairs of get* and set* methods, is a good candidate.

Problems start to appear when you need to:

  • Make this class thread-safe.
  • Or make defensive copies of it, due to the difficulties with the first point.

The classical way around concurrency problems would be to use an immutable class. But managing immutable classes may be unwieldy, especially in the circumstances I've just described: you would need a huge constructor or a special builder class.

So this is the problem I tried to solve, and I hope you don't hate me after you read how I did it.

The idea was very simple: use a hash map to store the properties. At first I didn't know how to make it type-safe, as the values would be of different types. Then I came up with a special kind of key that dictates its value's type. Here's an excerpt from PropertyKey.java:

public class PropertyKey<T> {
    private final Class<T> clazz;
    private final String name;

    public PropertyKey(Class<T> valueType, String name) {
        this.clazz = valueType;
        this.name = name;
    }

    /**
     * Returns true if the given value may be assigned
     * to a variable of type T. Null is accepted for any type.
     */
    public boolean checkType(Object value) {
        if (null == value) {
            return true;
        }
        return this.clazz.isAssignableFrom(value.getClass());
    }
}

Constructing a new key looks like this:

PropertyKey<String> ENTITY_NAME = 
        new PropertyKey<String>(String.class, "ENTITY_NAME");

A bit verbose, and I'm sorry for that, but what I get in return is a key for a field ENTITY_NAME of type String. The class has a method checkType that returns true if its argument is an instance of T. All of this is used by a special immutable-map wrapper.

public class PropertyHolder {

    private final ImmutableMap<PropertyKey<?>, ?> storage;

    @SuppressWarnings("unchecked") // safe: put() verifies each value's type
    public <T> T get(PropertyKey<T> key) {
        return (T) storage.get(key);
    }

    /**
     * Adds key/value pair to the state and returns new 
     * {@link PropertyHolder} with this state.
     * 
     * @param key {@link PropertyKey} instance.
     * @param value Value of type specified in {@link PropertyKey}.
     * @return New {@link PropertyHolder} with updated state.
     */
    public <T> PropertyHolder put(PropertyKey<T> key, T value) {
        Preconditions.checkNotNull(key, "PropertyKey cannot be null");
        Preconditions.checkNotNull(value, "Value for key %s is null", 
                key);
        Preconditions.checkArgument(key.checkType(value), 
                "Property \"%s\" was given " 
                + "value of a wrong type \"%s\"", key, value);
        // Creates ImmutableMap.Builder with new key/value pair.
        return new PropertyHolder(filterOutKey(key)
                .put(key, value).build());
    }
}
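The helper filterOutKey is not shown in this excerpt; it is part of the full sources. Roughly, it copies every existing entry except the given key into a fresh ImmutableMap.Builder, so that put can replace an existing value. A minimal sketch of what it might look like:

private ImmutableMap.Builder<PropertyKey<?>, Object> filterOutKey(
        PropertyKey<?> key) {
    ImmutableMap.Builder<PropertyKey<?>, Object> builder =
            ImmutableMap.builder();
    // Copy everything except the entry being replaced.
    for (Map.Entry<PropertyKey<?>, ?> entry : storage.entrySet()) {
        if (!entry.getKey().equals(key)) {
            builder.put(entry.getKey(), entry.getValue());
        }
    }
    return builder;
}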

You can download the full source code and complete unit tests from here. In the full implementation I require values to be Serializable, but only because my project needs it; feel free to get rid of that constraint.

PropertyHolder is an immutable class with an interface very similar to Map's. Its methods put and remove return a new copy of the PropertyHolder with the modifications applied.

The way I do it is not optimal, as every modification creates a full copy of the ImmutableMap; one way to solve this would be to use persistent collections. But if you store no more than about 20 objects and don't update them too often, the current implementation should do just fine.

Declaring keys is a little ugly, but once that's done they are very easy to use:

// Adding new k/v pair
PropertyHolder holder = new PropertyHolder();
holder = holder.put(ENTITY_NAME, "Person");

// Reading value for a key in a typesafe way
String name = holder.get(ENTITY_NAME);

That's it. Please let me know if you find this solution useful or if there's a way to improve it.

Feel free to use it as is or modify it to your needs. I'm publishing the full sources and unit tests under the Eclipse Public License.

Monday, August 20, 2012

Use case for mutable objects in Clojure

I'm currently implementing the Aho-Corasick dictionary-indexing algorithm. For that I have to build an augmented trie, where a node can point to another branch of the trie. Such links are called failure links. For example, say a trie contains two strings: "abcd" and "fgabch". Here the letters of the second string (starting from "a") will have failure links to the corresponding letters of the first word.

Here's my ASCII "art":

a-b-c-d
^___
    |
f-g-a-b-c-h

a-b-c-d
  ^___
      |
f-g-a-b-c-h

a-b-c-d
    ^___
        |
f-g-a-b-c-h

This is needed to guarantee matching in time proportional to the length of the text. If the text is "fgabcd", the matching algorithm will match all the letters of "fgabc" and fail to match the last "d"; then it will use the failure link fgabc -> abc and continue matching from there. As a result it will report a matched word with start and end indices [2,5].

Without failure links the algorithm would have to restart matching from "g", finding nothing, and then again from "a", where it would succeed. This property of never going back is crucial in the complexity analysis and gives the much-desired O(n) execution time.

Now, I have tried hard to implement this trie using immutable data structures. Following the description from the book, I need to first construct a simple trie with all the words in it and then traverse it from the root in breadth-first order to compute the failure links.

The problem started to emerge when I realized that a failure link may point to the node's own parent: for example, when indexing the string "aaa", the second letter "a" gets a failure link to the first letter "a", its parent. As it is impossible to create cycles using constructor-based initialization, immutable collections initialized only in constructors wouldn't work.
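Just to illustrate the node shape (my actual code is in Clojure and not shown here; this is a hypothetical Java rendering): the fail field has to be assigned after construction, during the breadth-first pass, and that is exactly the kind of cycle-creating mutation that constructor-only initialization cannot express.

import java.util.HashMap;
import java.util.Map;

class TrieNode {
    final Map<Character, TrieNode> children = new HashMap<Character, TrieNode>();
    TrieNode fail;   // failure link; set after construction,
                     // may point back to this node's parent
    boolean wordEnd; // true if a dictionary word ends at this node
}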

I know about the delay primitive and how it can be used to create cycles; the Korma library makes very nice use of it. But in my case it would be prohibitively expensive (citation needed). This algorithm may be used to index thousands of strings, and it is not practical to create tens of thousands of delay objects.

To be clear, I'm not trying to argue against immutable data structures. It's just that sometimes we have to venture into the realm of mutability to achieve efficient code. And I think transients in Clojure are some sort of proof of that.

Thursday, March 11, 2010

Lazy graph traversal in Clojure

Clojure is a new Lisp-like programming language that runs on the JVM. It has tons of interesting features worth exploring; to name a few: a concurrency model based on immutable data and software transactional memory, lazy sequences, etc.

I am currently re-reading a book on algorithms, so I decided it might be a good idea to implement some of them in Clojure. I have implemented breadth-first and depth-first graph traversals using the lazy-seq paradigm.

Some aspects of the implementation may appear slightly inefficient; for example, I represent the graph as a sequence of pairs, and each time I need all the neighbors of a node I traverse the entire sequence. But I did it that way so that I could concentrate on the algorithm in question.

For the experienced reader: if there is anything in my code that you consider bad form, please don't hesitate to comment.

Without further ado, here it is:

(ns algo.traversal)

"
 1--2-6--7
 |   \\ /
 3--4-5
"

(def graph [[1 2] [1 3] [2 6] [6 7] [3 4] [4 5] [2 5] [5 7]])

(defn transitions
  "Finds all the transitions going to and from the node.
  Transitions pointing to the node are reversed."
  [graph node]
  (map #(if (= node (first %))
          %
          (reverse %))
       (filter #(or (= node (first %)) (= node (second %))) graph)))

(defn breadth-first
  "Returns a lazy sequence of transitions in breadth-first order
  starting from the given node."
  [graph start]
  (let [walk 
        (fn walk [past-nodes trans]
          (let [next-trans (drop-while #(past-nodes (second %)) trans)]
            (when-let [next-node (second (first next-trans))]
              (lazy-seq
               (cons (first next-trans)
                     (walk (conj past-nodes next-node)
                           (concat next-trans (transitions graph next-node))))))))]
    (walk #{start} (transitions graph start))))

(defn depth-first
  "Returns a lazy sequence of transitions in depth-first order
  starting from the given node."
  [graph start]
  (let [walk 
        (fn walk [past-nodes trans]
          (let [next-trans (drop-while #(past-nodes (second %)) trans)]
            (when-let [next-node (second (first next-trans))]
              (lazy-seq
               (cons (first next-trans)
                     (walk (conj past-nodes next-node)
                           (concat (transitions graph next-node) next-trans)))))))]
    (walk #{start} (transitions graph start))))

As I mentioned before, I define the graph as a sequence of pairs; all the transitions are navigable in both directions.
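A quick sanity check of transitions on this graph (return values written out by hand):

(transitions graph 1) ;; => ([1 2] [1 3])
(transitions graph 5) ;; => ((5 4) (5 2) [5 7])

Note how the incoming pairs [4 5] and [2 5] were reversed.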

The breadth-first and depth-first functions are almost the same except for one tiny part, which I will talk about later.

Now, let's take a closer look at the structure of these functions.

Both of them do two things: define an inner function walk and call it. All the juice is in this function. Here's the one from breadth-first:

(fn walk [past-nodes trans]
  (let [next-trans (drop-while #(past-nodes (second %)) trans)]
    (when-let [next-node (second (first next-trans))]
      (lazy-seq
       (cons (first next-trans)
             (walk (conj past-nodes next-node)
                   (concat next-trans (transitions graph next-node))))))))

walk is used to recursively traverse the graph. It closes over the outer function's arguments [graph start], and it has two arguments of its own:

  • past-nodes - a hash set of the nodes that have already been traversed, and
  • trans - the transitions yet to walk.

The next line defines a local binding next-trans, a sequence of transitions that is guaranteed to start with a transition to a new, untraversed node.

next-trans (drop-while #(past-nodes (second %)) trans)
After that, the next-node binding receives the id of the next node. If next-trans is empty, the whole block returns nil.
when-let [next-node (second (first next-trans))]
Then follows the call to lazy-seq, which wraps its body in a lazy sequence; this allows Clojure to suspend the computation until more data is requested. I pass it a sequence whose elements are the next transition and a recursive call to walk.

The recursive call represents the sole difference between the two algorithms. In the breadth-first case I build the sequence of transitions to traverse by appending the current node's transitions to the end of the sequence:

(walk (conj past-nodes next-node)
      (concat next-trans (transitions graph next-node)))
while in the depth-first case I do the opposite, prepending them to the head of the sequence:
(walk (conj past-nodes next-node)
      (concat (transitions graph next-node) next-trans))
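For the graph defined above, the two traversals starting from node 1 give (outputs written out by hand; reversed transitions print as lists):

(breadth-first graph 1)
;; => ([1 2] [1 3] [2 6] [2 5] [3 4] [6 7])

(depth-first graph 1)
;; => ([1 2] [2 6] [6 7] (7 5) (5 4) (4 3))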

Tuesday, December 29, 2009

Rounding doubles in Java

Strangely enough, there is no standard way in Java to round double values to a certain power of 10 or 2. There are ways to do it, but they seem too bulky and, maybe, not fast enough. Take the one with BigDecimal: creating a new object only to round a value seems like overkill.

After spending a little time studying the methods offered by the standard java.lang.Math class, I stumbled upon two that, in combination, allow me to round values to a power of two:

double Math.scalb(double d, int scaleFactor) - multiplies the given double value by 2 raised to scaleFactor,
double Math.rint(double a) - rounds the given value to the closest mathematical integer.

So here's what needs to be done:
  1. Divide the given double value by the given power of two.
  2. Round the result to the nearest integer.
  3. Multiply the result back by the same power of two.
But there is a catch you should know about: the resulting value may still have something below the given precision point, and that's OK; what this method really does is set the granularity to the given power of two.
Here's the method:
public static double round(double value, int power) {
  // Divide value by 2^power
  double scaled = Math.scalb(value, -power);
  // Discard the fractional part, multiply back by 2^power
  return Math.scalb(Math.rint(scaled), power);
}
This approach is great when you compare doubles with a tolerance and need equal hash codes for values that compare equal.
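For example, rounding to a granularity of 2^-2 = 0.25 (the values here are made up for illustration):

double a = round(0.26, -2); // 0.26 * 4 = 1.04, rint -> 1.0, back -> 0.25
double b = round(0.30, -2); // 0.30 * 4 = 1.2,  rint -> 1.0, back -> 0.25
System.out.println(a == b); // true: both values snap to the 0.25 grid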

Don't forget that rounding is done up to a certain power of two:
Power of two    Values the fractional part may assume
2^-1            0.5
2^-2            0.25, 0.5, 0.75
2^-3            0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875

Wednesday, February 18, 2009

XML persistence with Hibernate and Xstream

A while ago, on a project I was working on, we were writing a lot of blobs to a database. The persisted information was not of high interest, so we didn't even bother breaking the blobs into separate tables. Of course, on the very rare occasions when we needed to see the data, we couldn't.

The problem seemed hard to solve until I stumbled across a very neat XML persistence library, XStream. It allows you to persist and restore objects of any type to and from XML without any additional configuration. To learn more about the library, I recommend skimming through their very short but expressive tutorial.
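The core of the API is just two calls; someObject below stands for any object you want to persist:

XStream stream = new XStream();
String xml = stream.toXML(someObject);  // serialize the object graph to XML
Object copy = stream.fromXML(xml);      // restore an equal object from XML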

We used Hibernate for persistence, so I needed to teach it how to use XStream. Thankfully, Hibernate is an easily extensible library: I wrote a persister capable of transparently persisting any object as an array of chars in the form of XML.

My solution has a major drawback, though, which I haven't fixed: the maximum length of a character column is restricted in the DBMS we use (in Oracle it is 4000 chars). So if you need to persist some really long sets of data, or just big objects, you may have to rewrite the persister to use CLOB or whatever alternative your database offers.
package cern.oasis.util.persistence;

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

import org.hibernate.Hibernate;
import org.hibernate.HibernateException;
import org.hibernate.usertype.UserType;

import com.thoughtworks.xstream.XStream;

/**
 * Persists any serializable java object as XML string.
 * 
 * @author Ivan Koblik
 */
public class XMLString implements UserType {

  private static final int[] TYPES = { Types.VARCHAR };

  private final static XStream stream = new XStream();
  static {
    //Here you can define aliases of some classes if you want to
    //stream.alias("Impedance", Impedance.class);
  }

  @Override
  public Object assemble(Serializable cached, Object owner) 
      throws HibernateException {
    return null == cached ? null : stream.fromXML((String) cached);
  }

  @Override
  public Serializable disassemble(Object value) throws HibernateException {
    return null == value ? null : stream.toXML(value);
  }

  @Override
  public Object deepCopy(Object value) throws HibernateException {
    return null == value ? null : stream.fromXML(stream.toXML(value));
  }

  @Override
  public boolean equals(Object x, Object y) throws HibernateException {
    if (x == y) {
      return true;
    } else if (x == null || y == null) {
      return false;
    } else {
      return x.equals(y);
    }
  }

  @Override
  public int hashCode(Object x) throws HibernateException {
    return null == x ? 0 : x.hashCode();
  }

  @Override
  public boolean isMutable() {
    return true;
  }

  @Override
  public Object nullSafeGet(ResultSet rs, String[] names, Object owner) 
      throws HibernateException, SQLException {
    String persistedXml = (String) Hibernate.STRING.nullSafeGet(rs, names[0]);
    return null == persistedXml ? null : stream.fromXML(persistedXml);
  }

  @Override
  public void nullSafeSet(PreparedStatement st, Object value, int index) 
      throws HibernateException, SQLException {
    String xmlToPersist = null == value ? null : stream.toXML(value);
    if (null != xmlToPersist && xmlToPersist.length() > 4000) {
      throw new RuntimeException(
          "Can not persist strings longer than 4000 characters:\n" + xmlToPersist);
    }
    Hibernate.STRING.nullSafeSet(st, xmlToPersist, index);
  }

  @Override
  public Object replace(Object original, Object target, Object owner) 
      throws HibernateException {
    return this.deepCopy(original);
  }

  @Override
  public Class returnedClass() {
    return Serializable.class;
  }

  @Override
  public int[] sqlTypes() {
    return TYPES;
  }

}
That's it; the only thing left is to tell Hibernate to use this persister. Here's an example with annotations:
  
@Entity
public class EntityClass {
  ...
  @AccessType("field")
  @Type(type = "cern.oasis.util.persistence.XMLString")
  public Map<String, Boolean> getValueThatIsPersistentByXStream() {
      return this.map;
  }
}
Hope you find it useful. Comments are more than welcome!