Java

London Java Community Open Conference 2015

Yesterday, having saved up enough pennies, I went to the annual LJC Open Conference at the IBM building, right next to the National Theatre on London’s South Bank, and done in the same brutalist style.

Open Conferences, sometimes called un-conferences, have no set schedule, other than perhaps, as here, a keynote to start. It’s left to the participants to organise the presentations and workshops.

The Conference Keynote: 20 Years of Java

The day started with the keynote by Simon Ritter and Steve Elliot, both former employees of Sun Microsystems (Steve still works for Oracle, Simon does not). It covered Java since its launch in 1995, showing how the language has evolved from the 200-odd classes in version 1 to the 4,000+ in the latest version, 8. They commented that this bloat should improve with the introduction of “Jigsaw”, the modularisation of the JDK.

Performance by the Numbers with Simon Maple

The first presentation I attended concerned a survey conducted by Simon and his company to see how much performance testing is done by the profession and what effect it has. Unsurprisingly, few tested regularly, and most tested in reaction to performance bottlenecks found in production. Performance bugs were connected to databases as much as, if not slightly more than, to the applications themselves, JPA and Hibernate being the worst culprits in this regard. The lesson here: performance test early, often and with the right tools.

Microservices in the Real World with Chris Batey

Given the popularity of microservices at the moment, and linked, to some extent, with the previous talk, Chris took us through his experiences with microservices, especially dealing with outages and his efforts to make the applications more robust (he recommended Netflix’s Hystrix). The slides for the talk can be found here.

Lightning Talks and Lunch

Both before and after lunch there were lightning talks: five minutes where anyone could talk about almost anything. One involved the use of Microsoft’s Kinect to create ghostly 3D images, like a cut-price 3D scanner. Another mentioned the book “Gödel, Escher, Bach: An Eternal Golden Braid” by Douglas Hofstadter, with a view to writing every algorithm that ever could be written.

JDK8 Lambdas and Streams: Beyond the Basics with Simon Ritter

The first proper presentation after lunch was regarding the new features of JDK 8. Simon gave us a few examples of how he has changed his style of programming from the procedural style to the new functional style using lambdas and, especially, streams. In particular, he showed how using the Supplier interface could allow lazy evaluation, something normally only available in interpreted or functional languages such as Scala, and how to design and use accumulators for use with Streams.
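I didn’t note down Simon’s exact code, but the gist of the Supplier trick, as I understood it, is something like this (a minimal sketch of my own):

import java.util.function.Supplier;

public class LazyDemo {

    //the expensive calculation only runs if get() is actually called,
    //so passing a lambda defers (and possibly avoids) the work
    static void logIfEnabled(boolean enabled, Supplier<String> message) {
        if (enabled) {
            System.out.println(message.get());
        }
    }

    static String expensiveReport() {
        return "imagine something costly here";
    }

    public static void main(String[] args) {
        logIfEnabled(false, () -> expensiveReport()); //never evaluated
        logIfEnabled(true, () -> expensiveReport());  //evaluated on demand
    }
}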

IoT 4 Java with Carl Jokl

Having been to the presentation earlier in the week on the BBC micro:bit, I was interested to see what the LJC was doing about this. Unfortunately, it’s all at the very early stages at the moment and Carl didn’t have much to show other than a few concepts: it would have been nice to see some actual things.

Generics: Past, Present and Future with Richard Warburton

Lastly, Richard gave a short presentation on the use of Generics and how they might evolve with future versions. Generics are Java’s way of implementing meta-classes, classes built around other classes, Collections being the usual example. Generics also support intersection types, which can make things even more confusing, so you get code like this:

private static <I extends DataInput & Closeable> PersonWithGenerics read(I source)

This allows any object that implements both DataInput and Closeable somewhere in its hierarchy of classes and interfaces to be used. It’s a bit like a multiple inheritance workaround.

Richard also showed us an actual use of the “super” keyword in Generics, something you don’t often get. I won’t go into it here, as it was somewhat involved, but he put the code for the session on Github.
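For flavour, the textbook use of super is the producer-extends, consumer-super idiom (my own sketch, not Richard’s actual example):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SuperDemo {

    //dest only consumes T values, so "? super T" is the most
    //permissive bound: a List<Object> can receive Integers
    static <T> void copy(List<? super T> dest, List<? extends T> src) {
        for (T item : src) {
            dest.add(item);
        }
    }

    public static void main(String[] args) {
        List<Object> anything = new ArrayList<>();
        copy(anything, Arrays.asList(1, 2, 3));
        System.out.println(anything); //[1, 2, 3]
    }
}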

He also mentioned some ideas for future versions, including generics over primitive types (rather than just their object wrappers, as at the moment).

Conclusion

Was it worth the £25? Yes, I think so, and 200 other people seemed to think so too. The presentations I saw were of reasonable quality, better than you might expect; perhaps because it’s less contrived than a more expensive conference.

London Java Community, Docklands – Swagger and Hazelcast

Tonight I went along to see two presentations hosted by the London Java Community Docklands branch at Credit Suisse, all wood panelling and soft carpets in the lobby:

First up was David Garry, whose soft Irish tones introduced us to Swagger, an open source REST API definition language and tool suite. You can either reverse engineer your existing API into Swagger or create a new one and generate code from that. The web site has a rather natty online editor, which will allow you to roll your own definition as well as creating the source code for both server and client. Shown below is some of the code produced for SpringMVC:

@Controller
@RequestMapping(value = "/addresses", produces = {APPLICATION_JSON_VALUE})
@Api(value = "/addresses", description = "the addresses API")
@javax.annotation.Generated(value = "class io.swagger.codegen.languages.SpringMVCServerCodegen", date = "2015-10-13T22:20:28.432Z")
public class AddressesApi {

  @ApiOperation(value = "Addresses",
                notes = "A collection of addresses.",
                response = Address.class,
                responseContainer = "List")
  @ApiResponses(value = {
    @ApiResponse(code = 200, message = "An array of addresses"),
    @ApiResponse(code = 200, message = "Unexpected error") })
  @RequestMapping(value = "",
                  method = RequestMethod.POST)
  public ResponseEntity<List<Address>> addressesPost(@ApiParam(value = "Latitude component of location.",
                                                               required = true)
                                                     @RequestParam(value = "latitude", required = true) Double latitude

It’s a mess of annotations, as you can see, but I suppose if you’ve got a couple of hundred of these definitions, and you have to keep them all documented, Swagger can come in handy.

Next was David Brimley, who introduced us to Hazelcast. A few months ago, Packt had a free book offer, and one of the books I downloaded was an introduction to the subject.

It’s a kind of middleware for data services, styled as an “In-memory Data Grid”, implying caching of data. It does implement the JCache standard (JSR107), but it also includes a messaging and event API, allowing you to be notified of changes in data; clustering and scaling of data “nodes”; and remote code execution (imagine Java stored procedures… sort of). David gave us a very thorough overview, given the limited time available to him.
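There wasn’t time for much code, but getting started looks to be only a few lines (a minimal sketch against the public API; the map is shared across whatever nodes are running):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class HazelcastDemo {
  public static void main(String[] args) {
    //starts (or joins) a cluster node in-process
    HazelcastInstance hz = Hazelcast.newHazelcastInstance();
    //a distributed map: entries are visible to the whole cluster
    Map<String, String> capitals = hz.getMap("capitals");
    capitals.put("UK", "London");
    System.out.println(capitals.get("UK"));
    hz.shutdown();
  }
}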

Spring Boot and Spring Data

Continuing on from the previous article, I finally managed to get to Spring Data. In a way, this is just Spring JPA and looks like this:

@Configuration
@EnableAutoConfiguration
public class Application {

  public static void main(String[] args) {

    ConfigurableApplicationContext context = SpringApplication.run(Application.class);
    PersonRepository repository = context.getBean(PersonRepository.class);

    for (Person person : repository.findAll()) {
      System.out.println(person);
    }
    context.close();
  }
}

Although this looks similar to the Hibernate JPA seen previously, there are some significant changes. The first is the use of a repository interface, PersonRepository. This has to descend from the Spring Data Repository interface, usually via another interface:

public interface PersonRepository extends CrudRepository<Person, Long> {
}

I’ve used CrudRepository, the most basic, and parameterised it with the POJO class and the type of the unique identifier, which is a Long. Notice that you have to use the class equivalent of the primitive type. You don’t need to define anything else if you don’t want to; CrudRepository defines the basic functions as standard:

public interface CrudRepository<T, ID extends Serializable> extends Repository<T, ID> {
  public <S extends T> S save(S s);
  public <S extends T> Iterable<S> save(Iterable<S> itrbl);
  public T findOne(ID id);
  public boolean exists(ID id);
  public Iterable<T> findAll();
  public Iterable<T> findAll(Iterable<ID> itrbl);
  public long count();
  public void delete(ID id);
  public void delete(T t);
  public void delete(Iterable<? extends T> itrbl);
  public void deleteAll();
}
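If you do want more than the standard functions, Spring Data can derive queries from method names declared on the repository interface. A quick sketch (findByLastName is my own example, not part of this project):

import java.util.List;

public interface PersonRepository extends CrudRepository<Person, Long> {
  //Spring Data generates the implementation from the method name:
  //effectively SELECT ... FROM person WHERE lastname = ?
  List<Person> findByLastName(String lastName);
}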

As before, the field mappings have to be put directly in the POJO using annotations. Something to remember is that with Spring JPA you have to make sure the column names are lower case.

Next you have to configure the connection properties. This goes in a file called “application.properties”, the default:

spring.datasource.url=jdbc:jtds:sqlserver://localhost:1433/A_DB
spring.datasource.username=aaron
spring.datasource.password=********
spring.datasource.driverClassName=net.sourceforge.jtds.jdbc.Driver

You may have noticed, in the application definition, the use of the EnableAutoConfiguration annotation. This comes from Spring Boot, which is our next subject.

Spring Boot

So far, behind the scenes so to speak, we’ve been adding dependencies to the Maven POM file with some abandon, but to get Spring Boot working we have to remove them. Why use Spring Boot? I can’t really give you a straight answer, other than to say there are a lot fewer problems when using Spring Data. Take my word for it.

As described in the getting started guide on the Spring Boot web site, you have to add the following:

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>1.1.8.RELEASE</version>
</parent>

… which goes in the project root. Next you want something to tell Spring Boot to pull in the Data JPA layer:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

… which goes in the dependencies. Clear everything else out of the dependencies: this will cause all kinds of errors in the code, but resist the temptation to fix them by pulling the old dependencies back in; you will regret it if you do. Do a build (or clean and build) and you’ll find that new dependencies have been pulled in via the parent without you having to define anything. This also sorts out the import errors.

The one dependency that does have to be put in is specific to this particular project: the jTDS driver for MS SQL Server:

<dependency>
  <groupId>net.sourceforge.jtds</groupId>
  <artifactId>jtds</artifactId>
  <version>1.2</version>
</dependency>

You will be surprised to find that all this works straight away with no other configuration.

Spring Boot Web Services

So far, we’ve just been playing with the data, outputting the results to the console. To get it to do something useful, we need to introduce web services, in the form of a REST interface. This is what Spring Boot is really used for and this introduces the idea of Micro Services.

Micro Services are an architectural concept that takes componentisation to a system level. A system is divided up into sub-systems, each of which is considered a system in its own right, with its own security, metrics etc., and even its own built-in web server. These components are connected together using REST web services, so the definitions of these interfaces become very important. The implementation of one sub-system, or Micro Service, is independent of the others, providing the interfaces are adhered to, so you can have one service built using .Net, another in Java, and yet another in Node.js and JavaScript. There are lots of technologies maturing that support, take advantage of, or are taken advantage of by this concept: in particular Docker for deployment, and Spring Boot and Dropwizard for Java development.

Now we’ll add a REST service to the previous data project. First, we remove the data code from the Application class:

  public static void main(String[] args) {
    SpringApplication.run(Application.class);
  }

Exactly as before, just without the data code. This has been moved to the web service class, PersonController:

@RestController
public class PersonController {

  @Autowired
  private PersonRepository repository;

  @RequestMapping("/")
  public String index() {
    StringBuilder sb = new StringBuilder();
    sb.append("<!DOCTYPE html>").append("<html lang='en'>");
    sb.append("<head>").append("<meta charset='UTF-8'>");
    sb.append("<title>Something, something</title>").append("</head>");
    sb.append("<body>");
    sb.append("<table>");
    for (Person person : repository.findAll()) {
      sb.append("<tr>");
      sb.append("<td>").append(person.getId()).append("</td>");
      sb.append("<td>").append(person.getFirstName()).append("</td>");
      sb.append("<td>").append(person.getLastName()).append("</td>");
      sb.append("</tr>");
    }
    sb.append("</table>");

    sb.append("</body>");
    sb.append("</html>");
    return sb.toString();
  }
}

The class is designated a RestController, which tells Spring that this is the class that handles HTTP requests as REST requests. The RequestMapping annotation tells it which path to respond to, in this case the root. I’m responding with a string at this point, creating a simple web page as the payload.

Notice that the repository is automatically created by annotating the declaration with @Autowired.

The POM has also been added to with Spring Boot dependencies, spring-boot-starter-web and spring-boot-starter-jetty:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>

The former seems to be mostly used to enable the web services, but also, via the exclusion, to prevent Tomcat being used. Instead it uses Jetty, which is simpler. The only other thing needed is to tell Maven that this is a Boot application, rather than a plain Java one:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

That’s pretty much it. If you now run up the application, it will start Jetty on port 8080 and the response will be a web page with the Persons loaded from the database.

JSON

Instead of delivering an HTML page as the payload, we can deliver a JSON string. This is all built into Spring Boot and is just a case of altering the definition of the index function:

  @RequestMapping("/")
  public Person[] index() {
    List personList = new ArrayList();
    personList.addAll((Collection) repository.findAll());
    return personList.toArray(new Person[0]);
  }

findAll returns an Iterable, so this needs to be converted to an array via a List. The array of Person objects is encoded to JSON and sent as the response automatically.
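The payload then looks something like this (values invented for illustration):

[{"id": 1, "firstName": "Steve", "lastName": "Stranger"},
 {"id": 2, "firstName": "James", "lastName": "Jones"}]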

Database Integration with Java

The other week I saw a presentation on InfoQ regarding Spring Data, so I thought I’d have a look at how Java connects to databases.

JDBC

The most basic way of connecting to a database in Java is to use JDBC (Java Database Connectivity). This involves creating suitable objects, establishing a connection to the database, executing a SQL statement and parsing the results:

try {
      Connection connection = null;
      ResultSet resultset = null;
      try {
        Class.forName("net.sourceforge.jtds.jdbc.Driver").newInstance();
        connection = DriverManager.getConnection("jdbc:jtds:sqlserver://localhost:1433/A_DB",
                                                 "aaron",
                                                 "********");
        Statement statement = connection.createStatement();
        resultset = statement.executeQuery("SELECT "
                                           + "firstName, "
                                           + "lastName, "
                                           + "gender "
                                           + "FROM person");
        Person person;
        while (resultset.next()) {
          person = new Person();
          person.setFirstName(resultset.getString("firstName"));
          person.setLastName(resultset.getString("lastName"));
          person.setGender(resultset.getString("gender"));
          System.out.println(person);
        }

      } finally {
        if (resultset != null) {
          resultset.close();
        }
        if (connection != null) {
          connection.close();
        }
      }
    } catch (ClassNotFoundException ex) {
      Logger.getLogger(ConnectingUsingJDBC.class.getName()).log(Level.SEVERE, null, ex);
    } catch (InstantiationException ex) {
      Logger.getLogger(ConnectingUsingJDBC.class.getName()).log(Level.SEVERE, null, ex);
    } catch (IllegalAccessException ex) {
      Logger.getLogger(ConnectingUsingJDBC.class.getName()).log(Level.SEVERE, null, ex);
    } catch (SQLException ex) {
      Logger.getLogger(ConnectingUsingJDBC.class.getName()).log(Level.SEVERE, null, ex);
    }

Legacy systems contain a lot of code just like this, or variations thereof. As you can see, there’s a lot of “boilerplate” code: connecting to the database; closing the connections and cleaning up afterwards; mapping the fields in the result set to the attributes of the class, etc. However, it is standard with the JDK, which means you don’t need any third-party library. You can also remove some of the boilerplate using the latest features, such as multiple catches and try-with-resources:

try {
      Class.forName("net.sourceforge.jtds.jdbc.Driver");
      try (Connection connection = DriverManager.getConnection("jdbc:jtds:sqlserver://localhost:1433/A_DB",
                                                               "aaron",
                                                               "********");
           Statement statement = connection.createStatement();
           ResultSet resultset = statement.executeQuery("SELECT "
                                                        + "firstName, "
                                                        + "lastName, "
                                                        + "gender "
                                                        + "FROM person")) {
        Person person;
        while (resultset.next()) {
          person = new Person();
          person.setFirstName(resultset.getString("firstName"));
          person.setLastName(resultset.getString("lastName"));
          person.setGender(resultset.getString("gender"));
          System.out.println(person);
        }
      } //resources are closed automatically, even on an exception
    } catch (SQLException | ClassNotFoundException ex) {
      Logger.getLogger(ConnectingUsingJDBC.class.getName()).log(Level.SEVERE, null, ex);
    }

Try-with-resources saves having to tidy up, as it closes the resources for you. The multiple catch again saves you some lines, but only if you do the same thing for every exception. The result is noticeably shorter, but you still get the feeling that there’s an easier way to do this. And there is…

Hibernate

Look up the term “object-relational mapping” on Wikipedia and you get this:

“Object-relational mapping (ORM) is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a “virtual object database” that can be used from within the programming language.”

In the list of ORM software for Java you’ll find a reference to Hibernate.

Essentially, what Hibernate does is map objects onto the result set fields, reducing the code even further, or, really, moving it somewhere else:

    Configuration config = new Configuration();
    config.configure();
    Session session = config.buildSessionFactory().openSession();

    Transaction tx = session.beginTransaction();
    try {
      List result = session.createQuery("from Person").list();

      for (Person person : (List<Person>) result) {
        System.out.println(person);
      }
    } finally {
      tx.commit();
      session.close();
    }

The connection details are moved to an XML configuration file that, by default, is named hibernate.cfg.xml and contained in the root resources folder:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.driver_class">net.sourceforge.jtds.jdbc.Driver</property>
    <property name="hibernate.connection.url">jdbc:jtds:sqlserver://localhost:1433/A_DB</property>
    <property name="hibernate.connection.username">aaron</property>
    <property name="hibernate.connection.password">********</property>
    <mapping resource="jobtype.hbm.xml"/>
  </session-factory>
</hibernate-configuration>

Notice that it refers to a “mapping resource”, in this case another XML file called “jobtype.hbm.xml”. This is where the details of the mapping are kept and it’s in the root folder again:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
  <class name="org.laughing.lemon.Person" table="person">
    <id column="ID" name="ID" type="int">
      <generator class="assigned"/>
    </id>
    <property column="firstName" name="firstName" type="string"/>
    <property column="lastName" name="lastName" type="string"/>
    <property column="gender" name="gender" type="string"/>
  </class>
</hibernate-mapping>

Notice that it’s mapped the class name to the table and the column names to the attribute names of the Person class. This, in turn, has to implement the Java Serializable interface:

import java.io.Serializable;

public class Person implements Serializable {

  private int ID;
  private String firstName;
  private String lastName;
  private String gender;

You still have to do the work (you can, in theory, leave out the mapping if the column names and attribute names are the same and it will work out the mappings automatically, but better safe…), but it is now done in configuration files rather than in Java. You also have to introduce some transaction handling for the first time, which was done automatically for you with JDBC.

The problem with this is that it’s not a standard. There’s still a fair bit of messing around if I wanted to switch to different ORM software. To impose some kind of standardisation, Java now has JPA, the Java Persistence API.

JPA

This is not a library or framework of any kind, but a standard, or specification, that ORMs now adhere to, usually by implementing a plug-in or extension, which is what Hibernate does. Thus Hibernate JPA:

    EntityManagerFactory entityManagerFactory =
          Persistence.createEntityManagerFactory("org.laughing.lemon");
    EntityManager entityManager = entityManagerFactory.createEntityManager();
    
    List<Person> result = 
         entityManager.createQuery("from org.laughing.lemon.Person", Person.class).getResultList();
    for (Person person : result) {
      System.out.println(person);
    }

    entityManager.close();

Very similar to the Hibernate implementation and, like Hibernate, it takes the connection information from an XML file, in this case “persistence.xml”:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
  <persistence-unit name="org.laughing.lemon" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
    <class>org.laughing.lemon.Person</class>
    <properties>
      <property name="javax.persistence.jdbc.url" value="jdbc:jtds:sqlserver://localhost:1433/A_DB"/>
      <property name="javax.persistence.jdbc.password" value="********"/>
      <property name="javax.persistence.jdbc.driver" value="net.sourceforge.jtds.jdbc.Driver"/>
      <property name="javax.persistence.jdbc.user" value="aaron"/>
    </properties>
  </persistence-unit>
</persistence>

Notice that the persistence unit has a name that then gets used in the Java application. The mapping is done in the POJO class:

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name="person")
public class Person implements Serializable {

  @Id
  @Column(name="id")
  private int ID;
  @Column(name="firstname")
  private String firstName;
  @Column(name="lastname")
  private String lastName;
  @Column(name="gender")
  private String gender;

Conclusion

So far, I’ve shown three separate ways that a Java application can connect to a database. JDBC is the most direct and basic, with Hibernate and JPA abstracting the database tables to Java objects.

Using PowerMock to Test Singletons

I’ve been trying to test a small suite of classes based around exception handling. One of the classes is a class factory implemented as a singleton, which has proved impossible to test with the usual techniques, i.e. sub-class and override and Mockito.

A class factory is a class used to create and initialise other classes. A singleton class is designed only to be instantiated once, with every use afterwards getting the same instance, rather than creating multiple copies of the same class.
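To illustrate the factory half (a minimal sketch with made-up names; a singleton appears further down):

//the product interface
interface Handler {
    void handle(Exception e);
}

//one concrete product
class LoggingHandler implements Handler {
    public void handle(Exception e) {
        System.out.println("logged: " + e.getMessage());
    }
}

//the class factory: callers ask it for instances rather than
//calling constructors themselves
class HandlerFactory {
    static Handler createHandler() {
        return new LoggingHandler();
    }
}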

Sub-class and override is a technique used to isolate various methods of a class during testing. Take a simple class like this:

public class DemoSuperClass {

    protected List<String> createNameList() {
        List<String> returnList = new ArrayList<>();

        returnList.add("Steve Stranger");
        returnList.add("James Jones");
        returnList.add("Harry Harnot");
        returnList.add("Bill Blogger");

        return returnList;
    } 

    public void printNameList() {
        List<String> nameList = createNameList();
        for (String name : nameList)
            System.out.println(name.split(" ")[0]); //prints the first name
    }

}

Say you want to test printNameList, but with an empty list of names, or different names. The best way to do this is to declare a sub-class of DemoSuperClass in the test harness and override createNameList:

private class DemoSubClass extends DemoSuperClass { //class in the test harness

    @Override
    protected List<String> createNameList() {
        return new ArrayList<>(); //an empty list of names
    } 
}

Now the sub-class can be tested.

Even if the method being overridden is called by the constructor, it will still work:

public class DemoConstructorClass {

    protected DemoConstructorClass() {
        procedure1();
    }

    protected void procedure1() {
        System.out.println("procedure1");
    }
}

    //in the test harness
    private class TestDemoConstructor extends DemoConstructorClass {

        @Override
        protected void procedure1() {
            System.out.println("Overriden procedure1");
        }
        
    }

Where this won’t work is in testing singletons. The singleton I’m trying to test looks something like this:

public class SingletonClass {
    private static SingletonClass instance = new SingletonClass();

    public static SingletonClass getInstance() {
        return instance;
    }
    
    protected SingletonClass() {
        procedure1();
    }

    protected void procedure1() {
        System.out.println("procedure1 called");
    }
}

    //in use, say, in the test harness
    SingletonClass instance = SingletonClass.getInstance();

Notice that the constructor is protected, so it cannot be called from outside the class (other than by a sub-class). The only way to get an instance is through getInstance, which returns the private static instance field created when the class is first referenced. Sub-classing and overriding has no effect, because the call to the constructor happens as soon as the class gets referenced and the private static instance gets created. Mockito spies don’t work either, for the same reason. Enter PowerMock.

PowerMock is a set of libraries used to extend existing mock libraries, such as EasyMock and Mockito, to cover situations where it’s difficult to use them, such as static or final methods and constructors. In particular, the MemberModifier.replace method allows us to replace methods (but only static ones) without instantiating the class.

    public static void replacement() {
        System.out.println("replacement");
    }
    
    public void testGetInstance() {
        //procedure1 has to be static to be replaced
        MemberModifier.replace(MemberModifier.method(SingletonClass.class,
                                                     "procedure1"))
                .with(MemberModifier.method(this.getClass(),
                                            "replacement"));
        SingletonClass result = SingletonClass.getInstance();
    }

It’s a bit brute-force, and with better design of the singleton class maybe unnecessary, but at least it can now be tested.

BDD and JBehave

I’ve been in my new job for a few weeks now and encountered Behavior Driven Development in the code I’m looking at, using JBehave.

BDD is a way of structuring unit tests in such a way as to make them more human-readable. Typically, a conventional unit test looks something like this (this is for a small builder class I’ve created):

@Test
public void testOneWhere() {
    //select with one criteria
    SQLBuilder instance = new SQLBuilder();
    instance.select().table("A_TABLE");
    String result = instance.where("A_COLUMN", "A_VALUE").build();
    assertTrue("result is not proper SQL: " + result,
            result.equals("SELECT * FROM A_TABLE 
                           WHERE A_COLUMN = \"A_VALUE\""));
}

This is quite a small test, but it’s easy to see that someone non-technical would have difficulty following what was going on here. Plus it’s not very flexible: if you want to test boundary conditions, for example, you either have to do a lot of cut-and-pasting or some fancy footwork with the test libraries.

Compare this to the equivalent BDD scenario (these live in files called stories):

Scenario: with a where clause

Given a new SQLBuilder object
When select is called
And table is set to A_TABLE
And where is set to A_COLUMN equals A_VALUE
And build is called
Then the result is SELECT * FROM A_TABLE WHERE A_COLUMN = "A_VALUE"

This is supported in the test harness by methods bound to the statements in the story:

@When("select is called")
public void selectIsCalled() {
    sqlBuilder.select();
}

@When("table is set to $table")
public void tableSetToTable(String table){
    sqlBuilder.table(table);
} //etc.

Not only is the story more readable, but you can parameterise the calls, setting up tables of inputs and expected outputs. Nice.
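For example, a parameterised scenario uses an Examples table, with the named parameters in angle brackets (my own sketch of the syntax, not from the actual suite):

Scenario: selects against different tables

Given a new SQLBuilder object
When select is called
And table is set to <table>
And build is called
Then the result is <expected>

Examples:
|table  |expected             |
|A_TABLE|SELECT * FROM A_TABLE|
|B_TABLE|SELECT * FROM B_TABLE|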

Its disadvantages seem to be that stories can’t define scenarios around thrown exceptions, but the idea is that exceptions should not occur at this level and should be handled lower down. Finer-grained unit tests would probably handle this.

Dependency Injection with Google Guice

One of the problems that we get in software development, specifically in the design of software classes, is that of the connections between classes. If one class uses another directly, even though it works, this can cause problems later on; it’s known as “tight coupling”. Here is a mundane example:

//a really dumb message sending class
public class MessageSender {
    public void sendMessage(String message) {
        System.out.println(message);
    }
}

//another dumb class that uses the message sender
public class MessageUser {
    //a reference to the sender object
    private MessageSender messageSender;
    //the message sender object is created in the constructor
    public MessageUser() {
        this.messageSender = new MessageSender();
    }
    //send the message via the sender
    public void sendMessage(String message) {
        messageSender.sendMessage(message);
    }
}

//the main class that uses the message classes
public class MessageMain {
    public static void main(String[] args) {
        MessageUser messageUser = new MessageUser();
        messageUser.sendMessage("This is a test");
    }
}

OK, this is really mundane, but you can see that MessageUser cannot use any other way of sending messages. If you want to use another means, say email, you’d have to either change what MessageSender does or change MessageUser to use a different class, call it EMailSender. However, we can use an interface instead of a class:

public interface MessageSender {
    public void sendMessage(String message);
}

//an implementation of the interface
public class SystemMessageSender implements MessageSender {
    public void sendMessage(String message) {
        System.out.println(message);
    }
}

You still have to change MessageUser, but to use an interface given to, or “injected” into, the MessageUser object via the constructor:

public class MessageUser {
    //a reference to the interface
    private MessageSender messageSender;
    //the interface is sent to the object using the constructor
    public MessageUser(MessageSender messageSender) {
        this.messageSender = messageSender;
    }
    
    public void sendMessage(String message) {
        messageSender.sendMessage(message);
    }
}

This works if we then, in the main class, create the object that implements the MessageSender and pass it to, or inject it into, the MessageUser object:

public class MessageMain {
    public static void main(String[] args) {
        SystemMessageSender messageSender = new SystemMessageSender();
        MessageUser messageUser = new MessageUser(messageSender);
        messageUser.sendMessage("This is a test");
    }
}

Now if we want to have MessageUser send emails, we create a class which also implements the MessageSender interface:

//an implementation of the interface
public class EMailMessageSender implements MessageSender {
    public void sendMessage(String message) {
        EMail.textMessage(message);
    }
}

All that then has to be done is create an object of this class in the main class and then pass it to MessageUser, as before:

        EMailMessageSender messageSender = new EMailMessageSender();
        MessageUser messageUser = new MessageUser(messageSender);
        messageUser.sendMessage("This is a test");

This technique, known as Dependency Injection, is quite an important design pattern and is mandatory in certain frameworks, such as Spring.

Introducing Google Guice

So far, so good, and it’s difficult to see how this can be really improved upon. However, Guice (pronounced with a J rather than a G) does for dependency injection what Mockito does for unit testing. To introduce Guice into the above example, we have to create another class, an extension of Guice’s AbstractModule class:

public class MessageModule extends AbstractModule {
    protected void configure() {
        bind(MessageSender.class).to(SystemMessageSender.class);
    }
}

You can sort-of see what’s going on. The module is responsible for creating the class that implements the interface, so whenever the interface is used, the object is bound to it. The magic happens in the Injector, used in the main class:

        Injector injector = Guice.createInjector(new MessageModule());
        MessageUser messageUser = injector.getInstance(MessageUser.class);
        messageUser.sendMessage("This is a test");

Notice that the module hasn’t been told about MessageUser, so the Injector is figuring out its dependencies from the constructor of the class, which also has to change, using the @Inject annotation:

    @Inject
    public MessageUser(MessageSender messageSender) {
        this.messageSender = messageSender;
    }

Now, all this doesn’t seem like a big deal, if anything we’ve added lines and classes, but if we now want to change it to send emails, all we have to do is change the Module:

public class MessageModule extends AbstractModule {
    protected void configure() {
        bind(MessageSender.class).to(EMailMessageSender.class);
    }
}

This concentrates the dependency injection in one place, the module, which acts as a sophisticated class factory. Guice is useful if you’ve got a lot of classes with umpteen dependencies, and it can save you a lot of work.

Mockito Spies

That is the collective noun, not the verb, although I suppose Mockito is spying…

Spies are used to make existing classes into Mockito classes. Say your class under test hooks into a Java listener, so you want some kind of dummy listener for it to hook into. You can’t do this with Mockito (at least I can’t find a way), but you can make your own:


//listener interface
public interface SomethingListener {
    //change that sends the event
    public void somethingChanged(EventObject e);
}

//our little dummy listener provider
public class DummyListenerProvider {

    private SomethingListener listener;

    //this is used by the class under test
    public void addSomethingListener(SomethingListener listener) {
        this.listener = listener;
    }

    //send event, saying the class is the source
    public void triggerEvent() {
        this.listener.somethingChanged(new EventObject(this));
    }

    //used by the event consumer when the event occurs
    public void switchMeOn(boolean isTrue) {
    }
}

Cool, but you still want all the nice little trinkets that come with the Mockito objects. This is where spies come in. You just wrap the class using spy instead of mock:


DummyListenerProvider spyListenerProvider = spy(new DummyListenerProvider());

ClassUnderTest underTest = new ClassUnderTest(spyListenerProvider);

spyListenerProvider.triggerEvent();

verify(spyListenerProvider).switchMeOn(true);

Mockito – An Elegant Stub for a More Civilised Age

In software development, we make use of something called “stubs”. These are bits of code that don’t do anything at the moment, but allow us to continue coding elsewhere as if they do. They are also used in testing to substitute for more complex objects, and this is where “mocking” (creating “mock” objects) has been developed in object-oriented programming and test-driven development.

As an extension to the college work I’ve been doing, I’ve been looking into using mock libraries for unit testing, and I’ve come across Mockito.

The mock objects it creates are, well, amazing to say the least. Say we’ve got an interface that, in turn, allows us to create a read I/O stream and a write I/O stream, for networking. The interface would look something like this:

//simple interface wrapper around a network socket
public interface MySocket {
    //creates a read I/O stream
    public BufferedReader getBufferedReader() throws IOException;
    //creates a write I/O stream
    public PrintWriter getPrintWriter() throws IOException;
    //indicates that a connection has been successfully made
    public boolean isConnected();
    //closes the connection
    public void close() throws IOException;
}

Now I can actually mock this interface with Mockito in the unit test:

import static org.mockito.Mockito.*;
...
MySocket socket = mock(MySocket.class);

You can now use this mock socket wherever the interface is needed, so:

SocketThread socketThread = new SocketThread(socket);

Now, unfortunately, the mock doesn’t really do anything other than just record what happens to it. Say SocketThread calls socket.close internally when it runs:

//this is in SocketThread.run
socket.close();
...
//if this isn't true, you'll get an exception
verify(socket).close();

Useful, but it becomes a bit more powerful when we add mocks for the input and output streams:

PrintWriter printWriter = mock(PrintWriter.class);
BufferedReader bufferedReader = mock(BufferedReader.class);
when(socket.getBufferedReader()).thenReturn(bufferedReader);
when(socket.getPrintWriter()).thenReturn(printWriter);

You can control exactly what the mock does and when it does it. Very useful for unit testing (and I haven’t even begun to use all its capabilities).
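For instance, successive return values can be scripted, and failures simulated, with the standard stubbing methods (a sketch building on the mocks above):

//successive calls return two lines, then end-of-stream
when(bufferedReader.readLine()).thenReturn("first line", "second line", null);
//void methods use the doThrow form: the next close() will fail
doThrow(new IOException("connection lost")).when(socket).close();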

Android Livecode, RxJava Architectures on Android at Microsoft

This was an Android Livecode presentation I attended, hosted at the Microsoft offices in sunny Westminster.

After the introduction, the main presentation was given by a Berliner, Timo Tuominen from Futurice, concerning the use of Functional Reactive programming on Android. This centred around the use of RxJava, an open source library based on a Microsoft library and implemented by Netflix, the video streaming company.

(A well-attended presentation, but there was free beer and pizza.)

RxJava is centred around the use of an Observable class, i.e. an implementation of the Observer design pattern from the “Gang of Four” book, on which most event models are based. It seems that the difference between, say, the standard Java listeners and RxJava is that listeners have to be implemented within the class as you’re coding it. Here’s something I’ve been working on for the College:

//custom event maintenance
//this is a list of events that have to
//be serviced when something happens
private List<SwitchListener> listenerList = new ArrayList<SwitchListener>();

public void addSwitchListener(SwitchListener listener) {
    listenerList.add(listener);
}

public void removeSwitchListener(SwitchListener listener) {
    listenerList.remove(listener);
}

private void fireSwitchListener() {
    //parcel up the state of the switch in a SwitchEvent object
    SwitchEvent e = new SwitchEvent(this, isSwitchOn());
    //then send it to every listener in the list
    for(SwitchListener listener: listenerList) {
        listener.switchChange(e);
    }
}

Listeners are all over Java components, as they provide a way of passing messages between the components and their containers without the components holding references to the latter (avoiding the problem known as “coupling”).

What RxJava seems to do is wrap up listeners, or something like them, into a framework that can be applied to any other object. For example, if you want to “listen” to an object so that when it changes you can do something else, or react to it, you can use the Observable class to do it:

Observable.from(names).subscribe(new Action1<String>() {
    public void call(String s) {
        System.out.println("Hello " + s + "!");
    }
});

It does seem to be something that you would think was already in functional programming languages, like Clojure and Scala, by default, but RxJava can be used by these, so maybe it isn’t?