Java

London Java Community Open Conference 2015

Yesterday, I saved up enough pennies to go to the annual LJC Open Conference at the IBM building, right next to the National Theatre on London’s South Bank and built in the same brutalist style.

Open Conferences, sometimes called Un-conferences, have no set schedule, other than perhaps, as here, a keynote to start. It’s left to the participants to organise the presentations and workshops.

The Conference Keynote: 20 Years of Java

The day started off with the keynote by Simon Ritter and Steve Elliot, both former employees of Sun Microsystems (Steve still works for Oracle, Simon does not). This covered Java since its launch in 1995, showing how the language has evolved from the 200-odd classes in version 1 to the 4000+ in the latest version, 8. They commented that this bloated situation should improve with the introduction of “Jigsaw”, the modularisation of the JDK.

Performance by the Numbers with Simon Maple

The first presentation I attended concerned a survey conducted by Simon and his company into how much performance testing is done by the profession and what effect it has. Unsurprisingly, few test regularly and most test in reaction to performance bottlenecks found in production. Performance bugs were connected to databases as much as, if not slightly more than, the applications themselves, with JPA and Hibernate the worst culprits in this regard. The lesson here: performance test early, often and with the right tools.

Microservices in the Real World with Chris Batey

Given the popularity of the subject at the moment, and linked, to some extent, with the previous talk, Chris took us through his experiences with microservices, especially dealing with outages and his efforts to make the applications more robust (he recommended Netflix’s Hystrix). His slides for the talk can be found here.

Lightning Talks and Lunch

Both before and after lunch there were lightning talks, five minutes where anyone could talk about almost anything. One involved the use of Microsoft’s Kinect to create ghostly 3D images, like a cut-price 3D scanner. Another mentioned the book “Gödel, Escher, Bach: An Eternal Golden Braid” by Douglas Hofstadter, with a view to writing every algorithm that could ever be written.

JDK8 Lambdas and Streams: Beyond the Basics with Simon Ritter

The first proper presentation after lunch concerned the new features of JDK 8. Simon gave us a few examples of how he has changed his style of programming from a procedural style to the new functional style using lambdas and, especially, streams. In particular, he showed how using the Supplier interface could allow lazy evaluation, something normally only available in interpreted or functional languages such as Scala, and how to design and use accumulators for use with Streams.
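A minimal sketch of the Supplier trick (my own example, not Simon’s; the logIfDebug method is hypothetical). The lambda wraps the expensive call, which is only evaluated if it’s actually needed:

import java.util.function.Supplier;

public class LazySupplierDemo {

    //the Supplier is only invoked if the guard condition holds,
    //so the expensive call may never happen at all
    static void logIfDebug(boolean debug, Supplier<String> message) {
        if (debug) {
            System.out.println(message.get());
        }
    }

    static String expensiveStateDump() {
        //stands in for a costly computation
        return String.valueOf(System.nanoTime());
    }

    public static void main(String[] args) {
        //nothing is computed here, the lambda is just wrapped up
        logIfDebug(false, () -> "state: " + expensiveStateDump());
    }
}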

IoT 4 Java with Carl Jokl

Having been to the presentation earlier in the week on the BBC micro:bit, I was interested to see what the LJC was doing about this. Unfortunately, it’s all at the very early stages at the moment and Carl didn’t have much to show other than a few concepts: it would have been nice to see some actual things.

Generics: Past, Present and Future with Richard Warburton

Lastly, Richard gave a short presentation on the use of Generics and how they might evolve in future versions. Generics are Java’s way of implementing parameterised types, classes built around other classes, Collections being the usual example. Generic bounds can also be intersection types, which can make things even more confusing, so you get code like this:

private static <I extends DataInput & Closeable> PersonWithGenerics read(I source)

This allows any object that implements both DataInput and Closeable in its hierarchy of classes and interfaces to be used. It’s a bit like a multiple-inheritance workaround.
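As it happens, the standard DataInputStream implements both DataInput and Closeable, so (a sketch, assuming the read method and PersonWithGenerics class from the slide above) a call might look like this:

//DataInputStream satisfies <I extends DataInput & Closeable>,
//so it can be passed straight to the read method above
try (DataInputStream in = new DataInputStream(new FileInputStream("people.dat"))) {
    PersonWithGenerics person = read(in);
}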

Richard also showed us an actual use of the “super” keyword in Generics, something you don’t often see. I won’t go into it here, as it was somewhat involved, but he put the code for the session on Github.

He also mentioned some ideas for future versions, including generics over primitive types (not just their object equivalents, as at the moment).

Conclusion

Was it worth the £25? Yes, I think so, and 200 other people thought so too. The presentations I saw were of reasonable quality, better than you would expect, perhaps because it’s less contrived than a more expensive conference.

London Java Community, Docklands – Swagger and Hazelcast

Tonight I went along to see two presentations hosted by the London Java Community Docklands branch at Credit Suisse, all wood panelling and soft carpets in the lobby:

First up was David Garry, whose soft Irish tones introduced us to Swagger, an open source REST API definition language and tool suite. You can either reverse engineer your existing API into Swagger or create a new one and generate code from that. The web site has a rather natty online editor, which will allow you to roll your own definition as well as creating the source code for both server and client. Shown below is some of the code produced for Spring MVC:

@Controller
@RequestMapping(value = "/addresses", produces = {APPLICATION_JSON_VALUE})
@Api(value = "/addresses", description = "the addresses API")
@javax.annotation.Generated(value = "class io.swagger.codegen.languages.SpringMVCServerCodegen", date = "2015-10-13T22:20:28.432Z")
public class AddressesApi {

  @ApiOperation(value = "Addresses", 
                notes = "A collection of addresses.", 
				response = Address.class, 
				responseContainer = "List")
  @ApiResponses(value = { 
    @ApiResponse(code = 200, message = "An array of addresses"),
    @ApiResponse(code = 200, message = "Unexpected error") })
  @RequestMapping(value = "", 
                  method = RequestMethod.POST)
  public ResponseEntity<List<Address>> addressesPost(@ApiParam(value = "Latitude component of location.", 
		required = true) 
		@RequestParam(value = "latitude", required = true) Double latitude

It’s a mess of annotations, as you can see, but I suppose if you’ve got a couple of hundred of these definitions, and you have to keep them all documented, Swagger can come in handy.

Next was David Brimley, who introduced us to Hazelcast. A few months ago, Packt had a free book offer and one I downloaded was an introduction to the subject.

It’s a kind of middleware for data services, styled as an “In-memory Data Grid”, implying caching of data. It does implement the JCache standard (JSR107), but it also includes a messaging and event API, allowing you to be notified of changes in data; clustering and scaling of data “nodes”; and remote code execution (imagine Java stored procedures… sort of). David gave us a very thorough overview, given the limited time available to him.
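I haven’t got David’s examples, but the core API is small enough to sketch (a minimal example of my own; the map name is made up):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class HazelcastDemo {
    public static void main(String[] args) {
        //starts (or joins) a cluster node in this JVM
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        //a distributed map, shared across the cluster
        IMap<String, String> map = hz.getMap("greetings");
        map.put("en", "Hello");
        System.out.println(map.get("en"));

        hz.shutdown();
    }
}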

Spring Boot and Spring Data

Continuing on from the previous article, I finally managed to get to Spring Data. In a way, this is just Spring JPA and looks like this:

@Configuration
@EnableAutoConfiguration
public class Application {

  public static void main(String[] args) {

    ConfigurableApplicationContext context = SpringApplication.run(Application.class);
    PersonRepository repository = context.getBean(PersonRepository.class);

    for (Person person : repository.findAll()) {
      System.out.println(person);
    }
    context.close();
  }
}

Although this looks similar to the Hibernate JPA code shown previously, there are some significant changes. The first is the use of a repository interface, PersonRepository. This has to descend from the Spring Data Repository interface, usually via another interface:

public interface PersonRepository extends CrudRepository<Person, Long> {
}

I’ve used CrudRepository, the most basic, and given it the class of the POJO and the type of the unique identifier, which is a Long. Notice that you have to use the class equivalent of the primitive type. You don’t need to define anything else if you don’t want to; CrudRepository defines the basic functions as standard:

public interface CrudRepository<T extends Object, ID extends Serializable> extends Repository<T, ID> {
  public <S extends T> S save(S s);
  public <S extends T> Iterable<S> save(Iterable<S> itrbl);
  public T findOne(ID id);
  public boolean exists(ID id);
  public Iterable<T> findAll();
  public Iterable<T> findAll(Iterable<ID> itrbl);
  public long count();
  public void delete(ID id);
  public void delete(T t);
  public void delete(Iterable<? extends T> itrbl);
  public void deleteAll();
}
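If you do need more than these, Spring Data can derive queries from method names, so a hypothetical finder (assuming Person has a lastName property) is just a declaration:

public interface PersonRepository extends CrudRepository<Person, Long> {
    //Spring Data generates the query from the method name
    List<Person> findByLastName(String lastName);
}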

As before, the field mappings have to be put directly in the POJO using annotations. Something to remember is that with Spring JPA you have to make sure the column names are lower case.
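For reference, the mapped POJO might look something like this sketch (the column names here are assumptions):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    //note the lower-case column names
    @Column(name = "first_name")
    private String firstName;

    @Column(name = "last_name")
    private String lastName;

    //getters, setters and toString omitted
}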

Next you have to configure the connection properties. This goes in a file called “application.properties”, the default:

spring.datasource.url=jdbc:jtds:sqlserver://localhost:1433/A_DB
spring.datasource.username=aaron
spring.datasource.password=********
spring.datasource.driverClassName=net.sourceforge.jtds.jdbc.Driver

You may have noticed, in the application definition, the use of the EnableAutoConfiguration annotation. This comes from Spring Boot, which is our next subject.

Spring Boot

So far, behind the scenes so to speak, we’ve been adding dependencies to the Maven POM file with some abandon but, to get Spring Boot working, we have to remove them. Why use Spring Boot? I can’t really give you a straight answer, other than to say there are a lot fewer problems when using Spring Data. Take my word for it.

As described in the getting started guide on the Spring Boot web site, you have to add the following:

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>1.1.8.RELEASE</version>
</parent>

… which goes in the root of the POM. Next you want something to tell Spring Boot to pull in the Data JPA layer:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

… which goes in the dependencies. Clear everything else out of the dependencies: this will cause all kinds of errors in the code, but resist the temptation to fix them by pulling the old dependencies back in; you will regret it if you do. Do a build (or clean and build) and you’ll find that the new dependencies have been pulled in via the parent without you having to define anything. This also sorts out the import errors.

The one dependency that does have to be put in is specific to this particular project, the jTDS driver for MS SQL Server:

<dependency>
  <groupId>net.sourceforge.jtds</groupId>
  <artifactId>jtds</artifactId>
  <version>1.2</version>
</dependency>

You will be surprised to find that all this works straight away with no other configuration.

Spring Boot Web Services

So far, we’ve just been playing with the data, outputting the results to the console. To get it to do something useful, we need to introduce web services, in the form of a REST interface. This is what Spring Boot is really used for and this introduces the idea of Micro Services.

Micro Services are an architectural concept that takes componentisation to the system level. A system is divided into sub-systems, each of which is considered a system in its own right, with its own security, metrics and so on, and even its own built-in web server. These components are connected together using REST web services, so the definitions of those interfaces become very important. The implementation of one sub-system, or Micro Service, is independent of the others, provided the interfaces are adhered to, so you can have one service built using .Net, another in Java and yet another in Node.js and JavaScript. There are lots of technologies being introduced and maturing that support, take advantage of, or are taken advantage of by this concept: in particular Docker for deployment, and Spring Boot and Dropwizard for Java development.

Now we’ll add a REST service to the previous data project. First, we remove the data code from the Application class:

  public static void main(String[] args) {
    SpringApplication.run(Application.class);
  }

Exactly as before, just without the data code. This has been moved to the web service class, PersonController:

@RestController
public class PersonController {

  @Autowired
  private PersonRepository repository;

  @RequestMapping("/")
  public String index() {
    StringBuilder sb = new StringBuilder();
    sb.append("<!DOCTYPE html>").append("<html lang='en'>");
    sb.append("<head>").append("<meta charset='UTF-8'>");
    sb.append("<title>Something, something</title>").append("</head>");
    sb.append("<body>");
    sb.append("<table>");
    for (Person person : repository.findAll()) {
      sb.append("<tr>");
      sb.append("<td>").append(person.getId()).append("</td>");
      sb.append("<td>").append(person.getFirstName()).append("</td>");
      sb.append("<td>").append(person.getLastName()).append("</td>");
      sb.append("</tr>");
    }
    sb.append("</table>");

    sb.append("</body>");
    sb.append("</html>");
    return sb.toString();
  }
}

The class is designated a RestController, which tells Spring that this is the class that handles HTTP requests as REST requests. The RequestMapping annotation tells it which path to respond to, in this case the root. I’m responding with a string at this point, creating a simple web page as the payload.

Notice that the repository is automatically created by annotating the declaration with @Autowired.

The POM has also been added to with Spring Boot dependencies, spring-boot-starter-web and spring-boot-starter-jetty:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>

The former seems to be mostly used to enable the web services, but also to prevent Tomcat from being used; instead it uses Jetty, which is simpler. The only other thing needed is to tell Maven that this is a Boot application, rather than a plain Java one:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

That’s pretty much it. If you now run up the application, it will start Jetty on port 8080 and the response will be a web page with the Persons loaded from the database.

JSON

Instead of delivering an HTML page as the payload, we can deliver a JSON string. This is all built into Spring Boot and is just a case of altering the definition of the index function:

  @RequestMapping("/")
  public Person[] index() {
    List personList = new ArrayList();
    personList.addAll((Collection) repository.findAll());
    return personList.toArray(new Person[0]);
  }

findAll returns an Iterable, so this needs to be converted to an array via an ArrayList. The array of Person objects is encoded to JSON and sent as the response automatically.

Using PowerMock to Test Singletons

I’ve been trying to test a small suite of classes based around exception handling. One of the classes is a class factory implemented as a singleton, which has proved impossible to test with the usual techniques, i.e. sub-class and override and Mockito.

A class factory is a class used to create and initialise other classes. A singleton class is designed only to be instantiated once and then every use afterwards uses the same instance over and over again, rather than creating multiple copies of the same class.

Sub-class and override is a technique used to isolate various methods of a class during testing. Take a simple class like this:

public class DemoSuperClass {

    protected List<String> createNameList() {
        List<String> returnList = new ArrayList<>();

        returnList.add("Steve Stranger");
        returnList.add("James Jones");
        returnList.add("Harry Harnot");
        returnList.add("Bill Blogger");

        return returnList;
    }

    public void printNameList() {
        List<String> nameList = createNameList();
        for (String name : nameList)
            System.out.println(name.split(" ")[0]); //prints the first name
    }

}

Say you want to test printNameList but with an empty list of names, or different names. The best way to do this is in the test harness declare a sub-class of DemoSuperClass and override createNameList:

private class DemoSubClass extends DemoSuperClass { //class in the test harness

    @Override
    protected List<String> createNameList() {
        List<String> returnList = new ArrayList<>();
        return returnList;
    }
}

Now the sub-class can be tested in place of the original.
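In the test itself, the sub-class just stands in for the superclass, something like this sketch:

@Test
public void printNameListCopesWithAnEmptyList() {
    //the overridden createNameList returns an empty list,
    //so printNameList should simply print nothing
    DemoSubClass demo = new DemoSubClass();
    demo.printNameList();
}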

Even if the method being overridden is called by the constructor, it will still work:

public class DemoConstructorClass {

    protected DemoConstructorClass() {
        procedure1();
    }

    protected void procedure1() {
        System.out.println("procedure1");
    }
}

    //in the test harness
    private class TestDemoConstructor extends DemoConstructorClass {

        @Override
        protected void procedure1() {
            System.out.println("Overriden procedure1");
        }
        
    }

Where this won’t work is in testing singletons. The singleton I’m trying to test looks something like this:

public class SingletonClass {
    private static SingletonClass instance = new SingletonClass();

    public static SingletonClass getInstance() {
        return instance;
    }
    
    protected SingletonClass() {
        procedure1();
    }

    protected void procedure1() {
        System.out.println("procedure1 called");
    }
}

    //in use, say, in the test harness
    SingletonClass instance = SingletonClass.getInstance();

Notice that the constructor is protected and cannot be accessed from outside the class. The only way to create the class is through getInstance, and that returns the private static instance field created when the class is first referenced. Sub-classing and overriding has no effect, because the call to the constructor happens as soon as the class gets referenced and the private static instance gets created. Mockito spies don’t work either, for the same reason. Enter PowerMock.

PowerMock is a set of libraries used to extend existing mock libraries, such as EasyMock and Mockito to cover situations where it’s difficult to use them, such as static or final methods and constructors. In particular, the MemberModifier.replace method allows us to override methods (but only static ones) without instantiating the class.

    public static void replacement() {
        System.out.println("replacement");
    }
    
    public void testGetInstance() {
        //procedure1 has to be static to be replaced
        MemberModifier.replace(MemberModifier.method(SingletonClass.class,
                                                     "procedure1"))
                .with(MemberModifier.method(this.getClass(),
                                            "replacement"));
        SingletonClass result = SingletonClass.getInstance();
    }

It’s a bit brute-force, and with better design of the singleton class maybe unnecessary, but at least it can now be tested.

BDD and JBehave

I’ve been in my new job for a few weeks now and encountered Behavior Driven Development in the code I’m looking at, using JBehave.

BDD is a way of structuring unit tests in such a way as to make them more human-readable. Typically, a conventional unit test looks something like this (this is for a small builder class I’ve created):

@Test
public void testOneWhere() {
    //select with one criterion
    SQLBuilder instance = new SQLBuilder();
    instance.select().table("A_TABLE");
    String result = instance.where("A_COLUMN", "A_VALUE").build();
    assertTrue("result is not proper SQL: " + result,
            result.equals("SELECT * FROM A_TABLE"
                    + " WHERE A_COLUMN = \"A_VALUE\""));
}

This is quite a small test, but it’s easy to see that someone non-technical would have difficulty following what was going on here. Plus it’s not very flexible: if you want to test boundary conditions, for example, you either have to do a lot of cut-and-pasting or some fancy footwork with the test libraries.

Compare this to the equivalent BDD statement (called stories):

Scenario: with a where clause

Given a new SQLBuilder object
When select is called
And table is set to A_TABLE
And where is set to A_COLUMN equals A_VALUE
And build is called
Then the result is SELECT * FROM A_TABLE WHERE A_COLUMN = "A_VALUE"

This is supported in the test harness by methods bound to the statements in the story:

@When("select is called")
public void selectIsCalled() {
    sqlBuilder.select();
}

@When("table is set to $table")
public void tableSetToTable(String table){
    sqlBuilder.table(table);
} //etc.

Not only is the story more readable, but you can parameterise the calls, setting up tables of inputs and expected outputs. Nice.
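For instance, a scenario can be run once per row of an Examples table, something like this sketch of the syntax (not from the actual project):

Scenario: selecting from different tables

Given a new SQLBuilder object
When select is called
And table is set to <table>
And build is called
Then the result is <sql>

Examples:
|table|sql|
|A_TABLE|SELECT * FROM A_TABLE|
|B_TABLE|SELECT * FROM B_TABLE|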

Its disadvantage seems to be that there are no story statements for expecting thrown exceptions, but the idea is that exceptions should not occur at this level and should be handled lower down. Finer-grained unit tests would probably cover this.

Dependency Injection with Google Guice

One of the problems that we get in software development, specifically in the design of software classes, is that of the connection between classes. If one class uses another directly, even though it works, this can cause problems later on and is known as “close coupling”. Here is a mundane example:

//a really dumb message sending class
public class MessageSender {
    public void sendMessage(String message) {
        System.out.println(message);
    }
}

//another dumb class that uses the message sender
public class MessageUser {
    //a reference to the sender object
    private MessageSender messageSender;
    //the message sender object is created in the constructor
    public MessageUser() {
        this.messageSender = new MessageSender();
    }
    //send the message via the sender
    public void sendMessage(String message) {
        messageSender.sendMessage(message);
    }
}

//the main class that uses the message classes
public class MessageMain {
    public static void main(String[] args) {
        MessageUser messageUser = new MessageUser();
        messageUser.sendMessage("This is a test");
    }
}

OK, this is really mundane, but you can see that MessageUser cannot use any other way of sending messages. If you want to use another means, say email, you’d have to either change what MessageSender does or change MessageUser to use a different class, call it EMailSender. However, we can use an interface instead of a class:

public interface MessageSender {
    public void sendMessage(String message);
}

//an implementation of the interface
public class SystemMessageSender implements MessageSender {
    public void sendMessage(String message) {
        System.out.println(message);
    }
}

You still have to change MessageUser, but now it uses an interface given to, or “injected” into, the object via the constructor:

public class MessageUser {
    //a reference to the interface
    private MessageSender messageSender;
    //the interface is sent to the object using the constructor
    public MessageUser(MessageSender messageSender) {
        this.messageSender = messageSender;
    }
    
    public void sendMessage(String message) {
        messageSender.sendMessage(message);
    }
}

This works if we then, in the main class, create the object that implements the MessageSender and pass it to, or inject it into, the MessageUser object:

public class MessageMain {
    public static void main(String[] args) {
        SystemMessageSender messageSender = new SystemMessageSender();
        MessageUser messageUser = new MessageUser(messageSender);
        messageUser.sendMessage("This is a test");
    }
}

Now if we want to have MessageUser send emails we create a class which also implements the MessageSender interface:

//an implementation of the interface
public class EMailMessageSender implements MessageSender {
    public void sendMessage(String message) {
        EMail.textMessage(message);
    }
}

All that then has to be done is create an object of this class in the main class and then pass it to MessageUser, as before:

        EMailMessageSender messageSender = new EMailMessageSender();
        MessageUser messageUser = new MessageUser(messageSender);
        messageUser.sendMessage("This is a test");

This technique, known as Dependency Injection, is quite an important design pattern and is mandatory in certain frameworks, such as Spring.

Introducing Google Guice

So far, so good, and it’s difficult to see how this can be really improved upon. However, Guice (pronounced with a J rather than a G) does for dependency injection what Mockito does for unit testing. To introduce Guice into the above example, we have to create another class, an extension of Guice’s AbstractModule class:

public class MessageModule extends AbstractModule {
    protected void configure() {
        bind(MessageSender.class).to(SystemMessageSender.class);
    }
}

You can sort-of see what’s going on. The module binds the interface to the class that implements it, so that wherever the interface is used an object of that class is supplied. The magic happens in the Injector, used in the main class:

        Injector injector = Guice.createInjector(new MessageModule());
        MessageUser messageUser = injector.getInstance(MessageUser.class);
        messageUser.sendMessage("This is a test");

Notice that the module hasn’t been told about MessageUser, so the Injector is figuring out its dependencies from the constructor of the class, which also has to change, using the @Inject annotation:

    @Inject
    public MessageUser(MessageSender messageSender) {
        this.messageSender = messageSender;
    }

Now, all this doesn’t seem like a big deal (if anything, we’ve added lines and classes), but if we now want to change it to send emails, all we have to do is change the Module:

public class MessageModule extends AbstractModule {
    protected void configure() {
        bind(MessageSender.class).to(EMailMessageSender.class);
    }
}

This concentrates the dependency injection in one place, the module, which acts as a sophisticated class factory. Guice is useful if you’ve got a lot of classes with umpteen dependencies and can save you a lot of work.

Mockito – An Elegant Stub for a More Civilised Age

In software development, we make use of something called “stubs”. These are bits of code that don’t do anything at the moment, but allow us to continue coding elsewhere as if they do. They are also used in testing to substitute for more complex objects, and this is where “mocking” (creating “mock” objects) has been developed in object-oriented programming and test-driven development.

As an extension to the college work I’ve been doing, I’ve been looking into using mock libraries for unit testing and I’ve come across Mockito.

The mock objects it creates are, well, amazing to say the least. Say we’ve got an interface that, in turn, allows us to create a read I/O stream and a write I/O stream, for networking. The interface would look something like this:

//simple interface wrapper around a network socket
public interface MySocket {
    //creates a read I/O stream
    public BufferedReader getBufferedReader() throws IOException;
    //creates a write I/O stream
    public PrintWriter getPrintWriter() throws IOException;
    //indicates that a connection has been successfully made
    public boolean isConnected();
    //closes the connection
    public void close() throws IOException;
}

Now I can actually mock this interface with Mockito in the unit test:

import static org.mockito.Mockito.*;
...
MySocket socket = mock(MySocket.class);

You can now use this mock socket interface in whatever needs the interface, so:

SocketThread socketThread = new SocketThread(socket);

Now, unfortunately, the mock doesn’t really do anything other than just record what happens to it. Say SocketThread calls socket.close internally when it runs:

//this is in SocketThread.run
socket.close();
...
//if this isn't true, you'll get an exception
verify(socket).close();

Useful, but it becomes a bit more powerful when we add mocks for the input and output streams:

PrintWriter printWriter = mock(PrintWriter.class);
BufferedReader bufferedReader = mock(BufferedReader.class);
when(socket.getBufferedReader()).thenReturn(bufferedReader);
when(socket.getPrintWriter()).thenReturn(printWriter);

You can control exactly what the mock does and when it does it. Very useful for unit testing (and I haven’t even begun to use all its capabilities).
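For example (a sketch using the mocks above), you can script successive return values, or have a call fail to exercise the error handling:

//first call returns a line, the second signals end-of-stream
when(bufferedReader.readLine()).thenReturn("hello", null);

//or make the socket blow up, to test the error handling
when(socket.getPrintWriter()).thenThrow(new IOException("no connection"));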

Android Livecode, RxJava Architectures on Android at Microsoft

This was an Android Livecode presentation I attended, hosted at the Microsoft offices in sunny Westminster.

After the introduction, the main presentation was given by a Berliner, Timo Tuominen from Futurice, concerning the use of Functional Reactive programming on Android. This centred around the use of RxJava, an open source library based on a Microsoft library and implemented by Netflix, the video streaming company.

(A well attended presentation, but there was free beer and pizza.)

RxJava is centred around the use of an Observable class, i.e. an implementation of the Observer design pattern from the “Gang of Four” book, on which most event models are based. It seems that the difference between, say, the standard Java listeners and RxJava is that the listeners have to be implemented within the class as you’re coding it. Here’s something I’ve been working on for the College:

//custom event maintenance
//this is a list of events that have to
//be serviced when something happens
private List<SwitchListener> listenerList = new ArrayList<SwitchListener>();

public void addSwitchListener(SwitchListener listener) {
    listenerList.add(listener);
}

public void removeSwitchListener(SwitchListener listener) {
    listenerList.remove(listener);
}

private void fireSwitchListener() {
    //parcel up the state of the switch in a SwitchEvent object
    SwitchEvent e = new SwitchEvent(this, isSwitchOn());
    //then send it to every listener in the list
    for(SwitchListener listener: listenerList) {
        listener.switchChange(e);
    }
}

Listeners are all over Java components, as they provide a way of passing messages between the components and their containers without the components holding references to the latter (avoiding the problem known as “coupling”).

What RxJava seems to do is wrap up listeners, or something like them, into a framework that can be applied to any other object. For example, if you want to “listen” to an object so that when it changes you can do something else, or react to it, you can use the Observable class to do it:

//names here is a List<String>
Observable.from(names).subscribe(new Action1<String>() {
    public void call(String s) {
        System.out.println("Hello " + s + "!");
    }
});
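Where it gets more interesting (a sketch in the same RxJava 1.x style, with a made-up filter) is that these Observables compose, so you can filter and transform the stream before reacting to it:

Observable.from(names)
        .filter(new Func1<String, Boolean>() {
            public Boolean call(String s) {
                //only greet names starting with "T"
                return s.startsWith("T");
            }
        })
        .map(new Func1<String, String>() {
            public String call(String s) {
                return s.toUpperCase();
            }
        })
        .subscribe(new Action1<String>() {
            public void call(String s) {
                System.out.println("Hello " + s + "!");
            }
        });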

It does seem to be something that you would think was already in functional programming languages, like Clojure and Scala, by default, but RxJava can be used from these, so maybe it isn’t?

Java Concurrent Animated

I thought I’d take a trip into town this evening to see a presentation at Skills Matter by Victor Grazi (complete with “Brooklyn accent”) concerning Java concurrency.

Concurrency is always a difficult subject, but not so much the thing itself as what happens when concurrent processes try to access the same resources at the same time. Normally, in Java, this is handled via the “synchronized” keyword. You designate a block of code and wrap it in a synchronized block:

synchronized(this) {
    Counter.count++;
    System.out.print(Counter.count + "  ");
}

or declare a method to be synchronized:

public synchronized void assign(int i) {
    val = i;
}

However, if one thread gets to the block first, it locks out any other threads trying to do the same. There’s nothing the others can do about it other than wait.

Victor’s presentation covered what’s available in the Java API, such as Locks and Semaphores, to get around this or avoid it altogether, as well as exploring other aspects of the API such as thread pooling.
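For a flavour of the alternatives (a sketch of my own, not Victor’s code), java.util.concurrent.locks.ReentrantLock lets a thread back off rather than queue forever:

import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int count = 0;

    public static void increment() {
        //unlike synchronized, tryLock returns immediately
        //if another thread already holds the lock
        if (lock.tryLock()) {
            try {
                count++;
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("busy, doing something else instead");
        }
    }
}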

What made this presentation a little different was the animation of the threads, which did little animated circuits to illustrate his points. The source can be found on SourceForge. It looks cute, plus it does show you the various options available to the Java programmer other than just the ubiquitous “synchronized”.

Overall, Victor gave a very good presentation of a difficult, but important, subject.

Creating a Custom Java AWT Component – Part 2: Resizing and Segment Control

The whole point of the control is to display a number, so inputting a number is just a case of setting a display value:

private int displayNumber = 8;

public int getDisplayNumber() {
    return displayNumber;
}

public void setDisplayNumber(int displayNumber) {
    this.displayNumber = displayNumber % 10;
    repaint();
}

A simple enough bean-compatible getter and setter, but notice that in the setter I’ve applied modulo 10 to restrict the input to single digits. I’ve also called the repaint method to re-display the component.

Now looking at the paint method, I only display a segment if it’s used for a particular number. To determine this, I use the Arrays.binarySearch method on a sorted array of the relevant numbers. For example, the bottom left segment is only used by 2, 6, 8 and zero:

//bottom left
if(Arrays.binarySearch(new int[] {0,2,6,8}, displayNumber) > -1) {
    polygon = createSegment(createSegmentPoints(0, segmentHeight * 2 + segmentWidth, segmentHeight, segmentWidth));
    g2.fill(polygon);
}

Perhaps this is a bit over the top and there’s a better way of doing it, but it saves all those OR clauses. This table shows which numbers light up each segment:

Segment       Numbers
Top           0,2,3,5,6,7,8,9
Top Left      0,4,5,6,8,9
Top Right     0,1,2,3,4,7,8,9
Middle        2,3,4,5,6,8,9
Bottom Left   0,2,6,8
Bottom Right  0,1,3,4,5,6,7,8,9
Bottom        0,2,3,5,6,8

All well and good so far. The only bit left to do is calculating the size of the segments in relation to the size of the panel. If you look at the previous diagram of the component (the segment arrangement figure), you can see that the height is three segment widths and two segment lengths, and the width is two segment widths and one length. Using this as a basis, you can then say:

Ws = 2.Wp – Hp

Where Ws is the width of the segment, Wp is the width of the panel and Hp is the height of the panel.  Correspondingly:

Ls = 2.Hp – 3.Wp

Where Ls is the length of the segment. You can see that for certain height and width ratios the width and length of the segments would come out negative, so to keep them positive:

2.Wp > Hp

and

3.Wp < 2.Hp.

Also:

Ws < Ls

Doing all the maths (Ws > 0 gives Wp > Hp/2, and Ws < Ls gives 5.Wp < 3.Hp), we get that 50% < Wp < 60% of Hp. 55% sounds about right. Plugging this into the paint function:

//derive the width from the height (55%)
int width = (int) (this.getHeight() * 0.55);
int height = this.getHeight();
//set the width as a proportion of the height
this.setSize(width, height);

//the formulas derived above
int segmentWidth = 2 * width - height;
int segmentLength = 2 * height - 3 * width;
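As a quick sanity check, using the 200 by 300 frame from the test below and ignoring insets: the width comes out at 0.55 × 300 = 165, the segment width at 2 × 165 − 300 = 30 and the segment length at 2 × 300 − 3 × 165 = 105, which add back up to the panel dimensions (3 × 30 + 2 × 105 = 300 high, 2 × 30 + 105 = 165 wide).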

Testing all this is simple enough. Put it all in a Frame, with a little timer to tick through the numbers:

private LEDPanel led;

public TestFrame(String title){
    //call the superclass constructor with the specified title
    super(title);

    //add the LED panel for testing
    led = new LEDPanel();
    add(led);
    led.setBackground(Color.black);
    led.setForeground(Color.red);
    led.setDisplayNumber(5);

    Timer timer = new Timer();
    timer.schedule(new TimerTask() {
        public void run() {
            led.setDisplayNumber(led.getDisplayNumber() + 1);
        }
    }, 2000, 1000);

    //add the exit listener and set the size
    addWindowListener(new WindowAdapter() {
        public void windowClosing(WindowEvent e) {
            System.exit(0);
        }
    });

    setSize(200, 300);
    //show the frame
    setVisible(true);
}

Now it’s all working tickety-boo, I thought I’d go over the refactoring of the code in the next part.