
Tuesday, September 4, 2012

Key/Value in-memory storage for Java

Managing multiple properties in Java may be tiresome, especially if the properties are of different types. What is a property, you ask? Anything that doesn't come in thousands and doesn't get changed a thousand times a second. For example, a class mapped to a database table with more than ten pairs of get* and set* methods is a good candidate.

Problems start to appear when you need to:

  • Make this class thread safe.
  • Or, make a defensive copy of it due to difficulties with the first point.

The classical way around concurrency problems is to use an immutable class. But managing immutable classes may be unwieldy, especially in the circumstances I've just described: you would need a huge constructor or a special builder class.

So this is the problem that I tried to solve and I hope you don't hate me after you read how I did it.

The idea was very simple: use a hash map to store the properties. But I didn't know how to make it type safe, as the values would be of different types. Then I came up with a special kind of key that dictates its value's type. Here's an excerpt from PropertyKey.java:

public class PropertyKey<T> {
    private final Class<T> clazz;
    private final String name;

    public PropertyKey(Class<T> valueType, String name) {
        this.clazz = valueType;
        this.name = name;
    }

    public boolean checkType(Object value) {
        if (null == value) {
            return true;
        }
        return this.clazz.isAssignableFrom(value.getClass());
    }
}
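To make the semantics of checkType concrete, here is a self-contained sketch; the PropertyKey class is repeated from the excerpt above, and the Number-typed key is my own example, not from the project:

```java
// Self-contained demo of checkType: subtypes pass, nulls pass,
// unrelated types fail.
public class PropertyKeyDemo {

    static class PropertyKey<T> {
        private final Class<T> clazz;
        private final String name;

        PropertyKey(Class<T> valueType, String name) {
            this.clazz = valueType;
            this.name = name;
        }

        boolean checkType(Object value) {
            if (null == value) {
                return true;
            }
            return this.clazz.isAssignableFrom(value.getClass());
        }
    }

    public static void main(String[] args) {
        PropertyKey<Number> count =
                new PropertyKey<Number>(Number.class, "COUNT");
        System.out.println(count.checkType(42));     // true: Integer extends Number
        System.out.println(count.checkType(null));   // true: nulls are accepted
        System.out.println(count.checkType("oops")); // false: String is not a Number
    }
}
```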

Constructing a new key looks like this:

PropertyKey<String> ENTITY_NAME = 
        new PropertyKey<String>(String.class, "ENTITY_NAME");

A bit verbose, and I'm sorry for that, but what I get in return is a key for a field ENTITY_NAME of type String. The class has a method checkType that returns true if its argument's class extends T. All this is used by a special implementation of an immutable hash map.

public class PropertyHolder {

    private final ImmutableMap<PropertyKey<?>, ?> storage;

    @SuppressWarnings("unchecked") // Safe: put() checks the value's type against the key.
    public <T> T get(PropertyKey<T> key) {
        return (T) storage.get(key);
    }

    /**
     * Adds key/value pair to the state and returns new 
     * {@link PropertyHolder} with this state.
     * 
     * @param key {@link PropertyKey} instance.
     * @param value Value of type specified in {@link PropertyKey}.
     * @return New {@link PropertyHolder} with updated state.
     */
    public <T> PropertyHolder put(PropertyKey<T> key, T value) {
        Preconditions.checkNotNull(key, "PropertyKey cannot be null");
        Preconditions.checkNotNull(value, "Value for key %s is null", 
                key);
        Preconditions.checkArgument(key.checkType(value), 
                "Property \"%s\" was given " 
                + "value of a wrong type \"%s\"", key, value);
        // Creates ImmutableMap.Builder with new key/value pair.
        return new PropertyHolder(filterOutKey(key)
                .put(key, value).build());
    }
}

You can download the full source code and complete unit tests from here. In the full implementation I require values to be Serializable only because my project needs it; feel free to get rid of that.

PropertyHolder is an immutable class with an interface very similar to Map's. Its put and remove methods return a new copy of PropertyHolder with the modification applied.

The way I do it is not optimal, as every modification creates a full copy of the ImmutableMap; one way to solve this would be to use persistent collections. But if you don't put more than twenty or so objects in it and don't update it too often, the current implementation should do just fine.
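The copy-on-write idea can be sketched with just the JDK. This is my own simplified version (String keys and Collections.unmodifiableMap instead of the typed keys and Guava's ImmutableMap), not the project's code:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// JDK-only sketch of a copy-on-write holder: every put copies the
// backing map, so each instance stays immutable.
public class CopyOnWriteHolder {

    private final Map<String, Object> storage;

    public CopyOnWriteHolder() {
        this.storage = Collections.emptyMap();
    }

    private CopyOnWriteHolder(Map<String, Object> storage) {
        this.storage = Collections.unmodifiableMap(storage);
    }

    public Object get(String key) {
        return storage.get(key);
    }

    public CopyOnWriteHolder put(String key, Object value) {
        // O(n) copy on every update, as discussed above.
        Map<String, Object> copy = new HashMap<String, Object>(storage);
        copy.put(key, value);
        return new CopyOnWriteHolder(copy);
    }
}
```

Note that the original holder is untouched by put; callers hold on to the returned instance, just as with PropertyHolder.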

Declaring keys is a little ugly, but once that's done they are very easy to use:

// Adding new k/v pair
PropertyHolder holder = new PropertyHolder();
holder = holder.put(ENTITY_NAME, "Person");

// Reading value for a key in a typesafe way
String name = holder.get(ENTITY_NAME);

That's it. Please let me know if you find this solution useful or if there's a way to improve it.

Feel free to use it as is or modify it to your needs. I'm publishing the full sources and unit tests under the Eclipse Public License.

Thursday, March 11, 2010

Lazy graph traversal in Clojure

Clojure is a new Lisp-like programming language that runs on the JVM. It has tons of interesting features worth exploring; to name a few: a concurrency model based on immutable data and software transactional memory, lazy sequences, etc.

I am currently re-reading a book on algorithms, so I decided it might be a good idea to implement some of them in Clojure. I have implemented breadth- and depth-first graph traversals using the lazy-seq paradigm.

Some aspects of the implementation may appear slightly inefficient; for example, I represent the graph as a sequence of pairs, and each time I need all the neighbors of a node I traverse the entire sequence. But I did it that way so I could concentrate on the algorithm in question.

For the experienced reader: if there is anything in my code that you consider bad style, please don't hesitate to comment.

Without further ado, here it is:

(ns algo.traversal)

"
 1--2-6--7
 |   \\ /
 3--4-5
"

;; A map literal can't hold duplicate keys (1 and 2 both repeat),
;; so the graph is a vector of edge pairs.
(def graph [[1 2] [1 3] [2 6] [6 7] [3 4] [4 5] [2 5] [5 7]])

(defn transitions
  "Finds all the transitions going to and from the node.
  Transitions to the node are reversed."
  [graph node]
  (map #(if (= node (first %))
          %
          (reverse %))
       (filter #(or (= node (first %)) (= node (second %))) graph)))

(defn breadth-first
  "Returns a lazy sequence of transitions in breadth-first order
  starting from the given node."
  [graph start]
  (let [walk 
        (fn walk [past-nodes trans]
          (let [next-trans (drop-while #(past-nodes (second %)) trans)]
            (when-let [next-node (second (first next-trans))]
              (lazy-seq
               (cons (first next-trans)
                     (walk (conj past-nodes next-node)
                           (concat next-trans (transitions graph next-node))))))))]
    (walk #{start} (transitions graph start))))

(defn depth-first
  "Returns a lazy sequence of transitions in depth-first order
  starting from the given node."
  [graph start]
  (let [walk 
        (fn walk [past-nodes trans]
          (let [next-trans (drop-while #(past-nodes (second %)) trans)]
            (when-let [next-node (second (first next-trans))]
              (lazy-seq
               (cons (first next-trans)
                     (walk (conj past-nodes next-node)
                           (concat (transitions graph next-node) next-trans)))))))]
    (walk #{start} (transitions graph start))))

As I mentioned before, I define the graph as a sequence of pairs; all transitions are navigable in both directions.

The breadth-first and depth-first functions are almost identical except for one tiny part, which I will talk about later.

Now, let's take a closer look at the structure of these functions.

Both of them do two things: define an inner function walk and call it. Apparently, all the juice is in this function. Here's the one from breadth-first:

(fn walk [past-nodes trans]
  (let [next-trans (drop-while #(past-nodes (second %)) trans)]
    (when-let [next-node (second (first next-trans))]
      (lazy-seq
       (cons (first next-trans)
             (walk (conj past-nodes next-node)
                   (concat next-trans (transitions graph next-node))))))))

walk recursively traverses the graph. It is closed over the outer function's arguments [graph start], and it has two arguments of its own:

  • past-nodes - hash-set of nodes that have already been traversed and
  • trans - transitions yet to walk.

The next line binds next-trans, a sequence of transitions guaranteed to start with a transition to a node that hasn't been visited yet:

next-trans (drop-while #(past-nodes (second %)) trans)
After that, next-node is bound to the id of the next node; if next-trans is empty, the whole block returns nil.
when-let [next-node (second (first next-trans))]
Then comes the call to lazy-seq. This macro wraps its body in a lazy sequence, which lets Clojure suspend the computation until more data is requested. I pass it a sequence whose elements are the next transition and a recursive call to walk.

The recursive call is the sole difference between the two algorithms. In the breadth-first case I build the sequence of transitions to traverse by appending the current node's transitions to the end of the sequence,

(walk (conj past-nodes next-node)
      (concat next-trans (transitions graph next-node)))
while in the depth-first case I do the opposite and prepend them to the head of the sequence:
(walk (conj past-nodes next-node)
      (concat (transitions graph next-node) next-trans))

Tuesday, December 29, 2009

Rounding doubles in Java

Strangely enough, there is no standard way in Java to round double values to a certain power of 10 or 2. There are ways to do it, but they seem too bulky and perhaps not fast enough. The one using BigDecimal, for example, feels like overkill: creating a new object only to round a value.

After spending a little time studying the methods offered by the standard java.lang.Math class, I stumbled upon two that in combination let me round values to a power of two:

double Math.scalb(double d, int scaleFactor) - multiplies the given double value by the given power of two,
double Math.rint(double a) - rounds to the closest integer value (ties go to the even integer).

So here's what needs to be done:
  1. Divide the given double value by the given power of two.
  2. Round the result to the nearest integer.
  3. Multiply the result back by the given power of two.
But there is a catch you should know about: the resulting value may still have something below the given precision point, and that's OK. What this method really does is set the granularity to the given power of two.
Here's the method:
public static double round(double value, int power) {
  // Divide value by 2^power
  double scaled = Math.scalb(value, -power);
  // Discard the fractional part, multiply back by 2^power
  return Math.scalb(Math.rint(scaled), power);
}
This approach is great when you compare doubles with a tolerance and need equal hash codes for equal values.

Don't forget that rounding is done up to a certain power of two:

Power of two | Values the fractional part may assume
-1           | 0.5
-2           | 0.25, 0.5, 0.75
-3           | 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875
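To make the granularity concrete, here is the method again in a runnable form with a few sample values (the class name is mine):

```java
// Demonstrates the power-of-two granularity of the round method above.
public class RoundDemo {

    public static double round(double value, int power) {
        // Divide value by 2^power, round to the nearest integer,
        // then multiply back by 2^power.
        double scaled = Math.scalb(value, -power);
        return Math.scalb(Math.rint(scaled), power);
    }

    public static void main(String[] args) {
        System.out.println(round(10.3, -2));  // 10.25: snapped to a 1/4 grid
        System.out.println(round(10.3, 0));   // 10.0: snapped to whole numbers
        System.out.println(round(1337.0, 4)); // 1344.0: snapped to multiples of 16
    }
}
```

Note that round(10.3, -2) yields 10.25, not 10.3: the value is snapped to the nearest multiple of 2^-2, exactly as the table above suggests.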

Wednesday, February 18, 2009

XML persistence with Hibernate and XStream

Not long ago, in a project I was working on, we were writing a lot of blobs to a database. The persisted information was not of high interest, so we didn't even bother breaking the blobs into separate tables. Of course, on the rare occasions when we needed to see the data, we couldn't.

The problem seemed hard to solve until I stumbled across a very neat XML serialization library, XStream. It allows you to persist and restore objects of practically any type to and from XML without any additional configuration. To learn more about the library, I recommend skimming through their very short but expressive tutorial.

We used Hibernate for persistence, so I needed to teach it how to use XStream. Thankfully, Hibernate is an easily extensible library, and I wrote a persister capable of transparently persisting any object as an array of chars in the form of XML.

My solution has a major drawback, though, which I haven't fixed: the maximum length of a character column is restricted in the DBMS we use (in Oracle it is 4000 chars). So if you need to persist some really long sets of data, or just big objects, you may have to rewrite the persister to use a CLOB or whatever alternative your database offers.

package cern.oasis.util.persistence;

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

import org.hibernate.Hibernate;
import org.hibernate.HibernateException;
import org.hibernate.usertype.UserType;

import com.thoughtworks.xstream.XStream;

/**
 * Persists any serializable java object as XML string.
 * 
 * @author Ivan Koblik
 */
public class XMLString implements UserType {

  private static final int[] TYPES = { Types.VARCHAR };

  private final static XStream stream = new XStream();
  static {
    //Here you can define aliases of some classes if you want to
    //stream.alias("Impedance", Impedance.class);
  }

  @Override
  public Object assemble(Serializable cached, Object owner) 
      throws HibernateException {
    return null == cached ? null : stream.fromXML((String) cached);
  }

  @Override
  public Serializable disassemble(Object value) throws HibernateException {
    return null == value ? null : stream.toXML(value);
  }

  @Override
  public Object deepCopy(Object value) throws HibernateException {
    return null == value ? null : stream.fromXML(stream.toXML(value));
  }

  @Override
  public boolean equals(Object x, Object y) throws HibernateException {
    if (x == y) {
      return true;
    } else if (x == null || y == null) {
      return false;
    } else {
      return x.equals(y);
    }
  }

  @Override
  public int hashCode(Object x) throws HibernateException {
    return null == x ? 0 : x.hashCode();
  }

  @Override
  public boolean isMutable() {
    return true;
  }

  @Override
  public Object nullSafeGet(ResultSet rs, String[] names, Object owner) 
      throws HibernateException, SQLException {
    String persistedXml = (String) Hibernate.STRING.nullSafeGet(rs, names[0]);
    return null == persistedXml ? null : stream.fromXML(persistedXml);
  }

  @Override
  public void nullSafeSet(PreparedStatement st, Object value, int index) 
      throws HibernateException, SQLException {
    String xmlToPersist = null == value ? null : stream.toXML(value);
    if (null != xmlToPersist && xmlToPersist.length() > 4000) {
      throw new RuntimeException(
          "Cannot persist strings longer than 4000 characters:\n" + xmlToPersist);
    }
    Hibernate.STRING.nullSafeSet(st, xmlToPersist, index);
  }

  @Override
  public Object replace(Object original, Object target, Object owner) 
      throws HibernateException {
    return this.deepCopy(original);
  }

  @Override
  public Class returnedClass() {
    return Serializable.class;
  }

  @Override
  public int[] sqlTypes() {
    return TYPES;
  }

}
That's it! The only thing left to do is tell Hibernate to use this persister. Here's an example with annotations:
  
@Entity
public class EntityClass {
  ...
  @AccessType("field")
  @Type(type = "cern.oasis.util.persistence.XMLString")
  public Map<String, Boolean> getValueThatIsPersistentByXStream() {
      return this.map;
  }
}
Hope you find it useful. Comments are more than welcome!

Saturday, November 29, 2008

Tricking the RemoteGroup

Last time I promised to write about another useful trick that helped me optimize group queries in Atlassian Crowd.

If you've ever used the Crowd web interface, you will have noticed that when querying for a group, Crowd also retrieves a list of all the group's members. That may be too much overhead when all you want is to display the group's attributes. Fortunately, there's a way to outsmart Crowd!

If you take a look at the RemoteDirectory interface, you'll be able to find those three methods:
  1. public List<RemotePrincipal> findAllGroupMembers(final String groupName);
  2. public RemoteGroup findGroupByName(String groupName, boolean onlyFetchDirectMembers);
  3. public RemoteGroup findGroupByName(String groupName);
The first one returns the list of principals that are members of the group. The second and third return an object representing the group with the given name. Crowd uses the latter two when you click a group name in its web interface; it then displays a page with the group's description and attributes. You may not even want to see the list of principals, which is accessible through the Members tab.

The problem is that the RemoteGroup instances returned by the second and third methods must already be populated with a list of members; otherwise the Members tab will be empty. So there doesn't seem to be much choice: either waste time querying for principals or have a non-functional connector.

Let's take a closer look at RemoteGroup: it has an interesting method, setDirectory(Directory directory). It appears that Crowd uses this directory when it sees that the object is not fully initialized. Thus we can return an empty RemoteGroup instance with only the group name and activity flag set. If Crowd later needs the list of members, it will call:
directory.getImplementation().findAllGroupMembers(...)

Here's my extension of the Directory class:
class ReferencingDirectory extends Directory {
  private static final long serialVersionUID = 1L;
  /**
   * Reference to the remote directory to avoid object creation.
   */
  private final RemoteDirectory remoteDirectory;

  public ReferencingDirectory(RemoteDirectory remoteDirectory) {
    Utils.checkNull("Remote directory", remoteDirectory);
    this.remoteDirectory = remoteDirectory;
  }

  @Override
  public RemoteDirectory getImplementation() {
    return remoteDirectory;
  }

  @Override
  public String getImplementationClass() {
    return remoteDirectory.getClass().getCanonicalName();
  }
}

I keep an instance of it in my implementation of RemoteDirectory and pass it to RemoteGroup whenever I need it.

That's it! Queries for groups are much faster now, and when really needed, members are queried separately.

Monday, November 24, 2008

Atlassian Crowd Custom Directory

Today I'd like to share my experience implementing a custom directory connector for Atlassian Crowd. There is a rather straightforward interface described on the Atlassian site, and it is not really hard to implement. What I would like to write about is the small number of tricks that helped me achieve the desired results.

By the way, if you take a look at the Atlassian documentation, you will notice that the implementation, among other things, should extend DirectoryEntity. That is not completely true: for me it was more than enough to implement the RemoteDirectory interface.

No roles

Although a notion of roles existed in my company's directory service, the roles were Windows-specific and completely irrelevant to Atlassian products. I needed simple stub implementations of all the role-related methods that would not break Crowd's workflow.

It was a good idea to return an empty list from findRoleMemberships(String principalName), as throwing an exception there would cause an error every time someone tried to view a principal's information in Crowd. Another method, searchRoles(SearchContext context), was also better off returning an empty list instead of throwing an exception.

Legacy data

We had been using JIRA, Confluence, and Bamboo for several years, and naturally the user and group information hadn't been synchronized with the central directory. Our intention was to integrate the Atlassian tools as smoothly as possible, so for every tool I used a stack of two directories, one of which was always a snapshot of the user information at the time of integration. Crowd was clever enough to merge data from both of them. It worked especially well for users who had chosen the same log-in names as in the company's directory: their group memberships from both directories were merged.
Note: the order in which the directories are listed actually matters; to use passwords from the central directory I had to put the custom directory first in the list.

Read-only

According to my company's security rules I had to implement a read-only connector, so I chose to throw UnsupportedOperationException whenever a user attempted to update any information concerning a principal, group, or role. Crowd behaved very well whenever the exception was thrown.

Modifiable internal directory

I also wanted to allow administrators, from JIRA for example, to modify information in Crowd's internal directory. So my policy was the following: throw ObjectNotFoundException if the user and/or group didn't exist in the central directory. The methods concerned were addPrincipalToGroup, removeGroup, removePrincipalFromGroup, updateGroup, updatePrincipal, and updatePrincipalCredential. Otherwise these methods defaulted to throwing UnsupportedOperationException. All this led to the following workflow (updateGroup taken as an example):
  • Invocation of updateGroup of the custom directory (CD).
  • Group doesn't exist in CD, throw the ObjectNotFoundException.
  • Crowd proceeds to the next directory, which is the Internal Directory (ID).
  • ID processes the request graciously.
To achieve the aforementioned behavior for groups I simply called this.findGroupByName(groupName), and if it didn't throw an exception, the method threw UnsupportedOperationException. To check a principal's existence it was enough to call this.findPrincipalByName(name).

Here's an example:

public void addPrincipalToGroup(String name, String groupName)
        throws ObjectNotFoundException {
    // Throws ObjectNotFoundException if the group is missing from the
    // central directory, letting Crowd fall through to the next directory.
    this.findGroupByName(groupName);
    throw new UnsupportedOperationException(CANNOT_EDIT_DIRECTORY);
}
Conversely, if the group actually existed in the central directory, UnsupportedOperationException was thrown, which forced Crowd to stop processing the request. This corresponded perfectly to my intentions.

Conclusion

As a result, I managed to keep the existing data editable while merging it with the central directory.

Everything I have described in this post fits inside an abstract implementation of RemoteDirectory, which I called ReadOnlyRemoteDirectory. Someday I will describe another trick that helped me minimize the group retrieval delay.