Saturday, July 12, 2014

An alternative approach of writing JUnit tests (the Jasmine way)

Recently I wrote a lot of Jasmine tests for a small personal project. It took me some time to really get the tests right, and afterwards I had a hard time switching back to JUnit. For some reason JUnit tests no longer felt that good, and I wondered whether it would be possible to write JUnit tests in a way similar to Jasmine tests.

Jasmine is a popular Behavior Driven Development testing framework for JavaScript, inspired by RSpec (a Ruby BDD testing framework).

A simple Jasmine test looks like this:
describe('AudioPlayer tests', function() {
  var player;

  beforeEach(function() {
    player = new AudioPlayer();
  });
  
  it('should not play any track after initialization', function() {
    expect(player.isPlaying()).toBeFalsy();
  });
  
  ...
});
The describe() function call in the first line creates a new test suite using the description AudioPlayer tests. Inside a test suite we can use it() to create tests (called specs in Jasmine). Here, we check if the isPlaying() method of AudioPlayer returns false after creating a new AudioPlayer instance.
The same test written in JUnit would look like this:
public class AudioPlayerTest {
  private AudioPlayer audioPlayer;

  @Before 
  public void before() {
    audioPlayer = new AudioPlayer();
  }

  @Test
  public void notPlayingAfterInitialization() {
    assertFalse(audioPlayer.isPlaying());
  }
  
  ...
}
Personally, I find the Jasmine test much more readable than the JUnit version. In the Jasmine test, the only noise that does not contribute anything is the braces and the function keyword; everything else carries useful information.
When reading the JUnit test, we have to filter out keywords like void, access modifiers (private, public, ...), annotations and irrelevant method names (like the name of the method annotated with @Before). On top of that, test descriptions encoded in camel-case method names are not that easy to read.

Besides the increased readability, I really like Jasmine's ability to nest test suites.
Let's look at an example that is a bit longer:
describe('AudioPlayer tests', function() {
  var player;

  beforeEach(function() {
    player = new AudioPlayer();
  });
  
  describe('when a track is played', function() {
    var track;
  
    beforeEach(function() {
      track = new Track('foo/bar.mp3');
      player.play(track);
    });
    
    it('is playing a track', function() {
      expect(player.isPlaying()).toBeTruthy();
    });
    
    it('returns the track that is currently played', function() {
      expect(player.getCurrentTrack()).toEqual(track);
    });
  });
  
  ...
});
Here we create a sub test suite that is responsible for testing the behavior when a Track is played by AudioPlayer. The inner beforeEach() call is used to set up a common precondition for all tests inside the sub test suite.

In contrast, sharing common preconditions for multiple (but not all) tests can sometimes become cumbersome in JUnit. Duplicating the setup code in every test is clearly bad, so we extract extra setup methods. To share data between setup and test methods (like the track variable in the example above) we then have to use member variables (with a much larger scope), as the sketch below shows.
Additionally, we should group tests with similar preconditions together, so readers do not have to scan the whole test class to find all tests that are relevant for a certain situation. Or we can split things up into multiple smaller classes. But then we might have to share setup code between these classes...
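To illustrate the point, here is a sketch of how the nested Jasmine example above might translate to plain JUnit (the helper method name givenATrackIsPlayed() is made up for illustration):
public class AudioPlayerTest {
  private AudioPlayer player;
  private Track track; // member variable with class-wide scope, used by only two tests

  @Before
  public void before() {
    player = new AudioPlayer();
  }

  // shared precondition that every "track is played" test has to call manually
  private void givenATrackIsPlayed() {
    track = new Track("foo/bar.mp3");
    player.play(track);
  }

  @Test
  public void isPlayingATrack() {
    givenATrackIsPlayed();
    assertTrue(player.isPlaying());
  }

  @Test
  public void returnsTheTrackThatIsCurrentlyPlayed() {
    givenATrackIsPlayed();
    assertEquals(track, player.getCurrentTrack());
  }
}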

If we look at Jasmine tests we see that the structure is defined by calling global functions (like describe(), it(), ...) and passing descriptive strings and anonymous functions.

With Java 8 we got lambdas, so we can do the same, right?
Yes, we can write something like this in Java 8:
public class AudioPlayerTest {
  private AudioPlayer player;
  
  public AudioPlayerTest() {
    describe("AudioPlayer tests", () -> {
      beforeEach(() -> {
        player = new AudioPlayer();
      });

      it("should not play any track after initialization", () -> {
        expect(player.isPlaying()).toBeFalsy();
      });
    });
  }
}
If we assume for a moment that describe(), beforeEach(), it() and expect() are statically imported methods that take appropriate parameters, this would at least compile. But how should we run this kind of test?
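For illustration, such statically imported methods could have signatures roughly like the following (a sketch of one possible API, not necessarily what Oleaster actually uses; Invokable is a made-up functional interface the lambdas are converted to):
public class StaticTestSupport {

  @FunctionalInterface
  public interface Invokable {
    void invoke() throws Exception;
  }

  // collects a (possibly nested) suite with the given description
  public static void describe(String description, Invokable block) { /* ... */ }

  // registers a setup block for the suite that is currently being collected
  public static void beforeEach(Invokable block) { /* ... */ }

  // registers a single spec within the current suite
  public static void it(String description, Invokable block) { /* ... */ }
}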

Out of curiosity I tried to integrate this with JUnit, and it turned out to be actually very easy (I will write about this in the future). The result so far is a small library called Oleaster.

A test written with Oleaster looks like this:
import static com.mscharhag.oleaster.runner.StaticRunnerSupport.*;
...

@RunWith(OleasterRunner.class)
public class AudioPlayerTest {
  private AudioPlayer player;
  
  {
    describe("AudioPlayer tests", () -> {
      beforeEach(() -> {
        player = new AudioPlayer();
      });
    
      it("should not play any track after initialization", () -> {
        assertFalse(player.isPlaying());
      });
    });
  }
}
Only a few things changed compared to the previous example. Here, the test class is annotated with JUnit's @RunWith annotation, which tells JUnit to use Oleaster when running this test class. The static import of StaticRunnerSupport.* gives direct access to static Oleaster methods like describe() or it(). Also note that the constructor was replaced by an instance initializer and the Jasmine-like matcher was replaced by a standard JUnit assertion.

There is actually one thing that is not so great compared to the original Jasmine tests: in Java, a local variable has to be effectively final to be used inside a lambda expression. This means that the following piece of code does not compile:
describe("AudioPlayer tests", () -> {
  AudioPlayer player;
  beforeEach(() -> {
    player = new AudioPlayer();
  });
  ...
});
The assignment to player inside the beforeEach() lambda expression does not compile, because player is not effectively final. In Java we have to use instance fields in situations like this (as shown in the example above).
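If you prefer to keep the variable local, one common workaround is a single-element array used as a mutable holder; the array reference itself stays effectively final, so the lambda may capture it:
describe("AudioPlayer tests", () -> {
  AudioPlayer[] player = new AudioPlayer[1]; // the array reference is effectively final

  beforeEach(() -> {
    player[0] = new AudioPlayer();
  });

  it("should not play any track after initialization", () -> {
    assertFalse(player[0].isPlaying());
  });
});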

In case you worry about reporting: Oleaster is only responsible for collecting test cases and running them. The whole reporting is still done by JUnit. So Oleaster should cause no problems with tools and libraries that make use of JUnit reports.

For example the following screenshot shows the result of a failed Oleaster test in IntelliJ IDEA:

If you wonder how Oleaster tests look in practice you can have a look at the tests for Oleaster (which are written in Oleaster itself). You can find the GitHub test directory here.

Feel free to add any kind of feedback by commenting on this post or by creating a GitHub issue.


Sunday, June 22, 2014

Using Markdown syntax in Javadoc comments

In this post we will see how we can write Javadoc comments using Markdown instead of the typical Javadoc syntax.

So what is Markdown?
Markdown is a plain text formatting syntax designed so that it optionally can be converted to HTML using a tool by the same name. Markdown is popularly used to format readme files, for writing messages in online discussion forums or in text editors for the quick creation of rich text documents.
(Wikipedia: Markdown)

Markdown is a formatting syntax that is very easy to read. Different variations of Markdown are used on Stack Overflow and GitHub to format user-generated content.

Setup
By default the Javadoc tool uses Javadoc comments to generate API documentation in HTML form. This process can be customized using Doclets. Doclets are Java programs that specify the content and format of the Javadoc tool's output.

The markdown-doclet is a replacement for the standard Java Doclet which gives developers the option to use Markdown syntax in their Javadoc comments. We can set up this doclet in Maven using the maven-javadoc-plugin.
<build>
  <plugins>
    <plugin>
      <artifactId>maven-javadoc-plugin</artifactId>
      <version>2.9</version>
      <configuration>
        <doclet>ch.raffael.doclets.pegdown.PegdownDoclet</doclet>
        <docletArtifact>
          <groupId>ch.raffael.pegdown-doclet</groupId>
          <artifactId>pegdown-doclet</artifactId>
          <version>1.1</version>
        </docletArtifact>
        <useStandardDocletOptions>true</useStandardDocletOptions>
      </configuration>
    </plugin>
  </plugins>
</build>

Writing comments in Markdown
Now we can use Markdown syntax in Javadoc comments:
/**
 * ## Large headline
 * ### Smaller headline
 *
 * This is a comment that contains `code` parts.
 *
 * Code blocks:
 *
 * ```java
 * int foo = 42;
 * System.out.println(foo);
 * ```
 *
 * Quote blocks:
 *
 * > This is a block quote
 *
 * Lists:
 *
 *  - first item
 *  - second item
 *  - third item
 *
 * This is a text that contains an [external link][link].
 *
 * [link]: http://external-link.com/
 *
 * @param id the user id
 * @return the user object with the passed `id` or `null` if no user with this `id` is found
 */
public User findUser(long id) {
  ...
}
After running

mvn javadoc:javadoc

we can find the generated HTML API documentation in target/site/apidocs.
The generated documentation for the method shown above looks like this:


As we can see the Javadoc comments get nicely converted to HTML.

Conclusion
Markdown has a clear advantage over standard Javadoc syntax: the source is far easier to read. Just have a look at some of the method comments of java.util.Map. Many Javadoc comments are full of formatting tags and are barely readable without tool support. But be aware that Markdown can cause problems with tools and IDEs that expect standard Javadoc syntax.

You can find the source of this example project on GitHub.


Thursday, June 5, 2014

Building a simple RESTful API with Spark

Disclaimer: This post is about the Java micro web framework named Spark and not about the data processing engine Apache Spark.

In this blog post we will see how Spark can be used to build a simple web service. As mentioned in the disclaimer, Spark is a micro web framework for Java inspired by the Ruby framework Sinatra. Spark aims for simplicity and provides only a minimal set of features. However, it provides everything needed to build a web application in a few lines of Java code.

Getting started
Let's assume we have a simple domain class with a few properties and a service that provides some basic CRUD functionality:
public class User {

  private String id;
  private String name;
  private String email;
  
  // getter/setter
}
public class UserService {

  // returns a list of all users
  public List<User> getAllUsers() { .. }
  
  // returns a single user by id
  public User getUser(String id) { .. }

  // creates a new user
  public User createUser(String name, String email) { .. }

  // updates an existing user
  public User updateUser(String id, String name, String email) { .. }
}
We now want to expose the functionality of UserService as a RESTful API (for simplicity we will skip the hypermedia part of REST ;-)). For accessing, creating and updating user objects we want to use the following URL patterns:
GET  /users       Get a list of all users
GET  /users/<id>  Get a specific user
POST /users       Create a new user
PUT  /users/<id>  Update a user
The returned data should be in JSON format.

To get started with Spark we need the following Maven dependencies:
<dependency>
  <groupId>com.sparkjava</groupId>
  <artifactId>spark-core</artifactId>
  <version>2.0.0</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.7.7</version>
</dependency>
Spark uses SLF4J for logging, so we need an SLF4J binding to see log and error messages. In this example we use the slf4j-simple dependency for this purpose, but you can also use Log4j or any other binding you like. Having slf4j-simple on the classpath is enough to see log output in the console.
We will also use GSON for generating JSON output and JUnit to write a simple integration test. You can find these dependencies in the complete pom.xml.

Returning all users
Now it is time to create a class that is responsible for handling incoming requests. We start by implementing the GET /users request that should return a list of all users.
import static spark.Spark.*;

public class UserController {

  public UserController(final UserService userService) {
    
    get("/users", new Route() {
      @Override
      public Object handle(Request request, Response response) {
        // process request
        return userService.getAllUsers();
      }
    });
    
    // more routes
  }
}
Note the static import of spark.Spark.* in the first line. This gives us access to various static methods including get(), post(), put() and more. Within the constructor the get() method is used to register a Route that listens for GET requests on /users. A Route is responsible for processing requests. Whenever a GET /users request is made, the handle() method will be called. Inside handle() we return an object that should be sent to the client (in this case a list of all users).

Spark benefits greatly from Java 8 lambda expressions. Route is a functional interface (it contains only one method), so we can implement it using a lambda expression. With a lambda expression, the Route definition from above looks like this:
get("/users", (req, res) -> userService.getAllUsers());
To start the application we have to create a simple main() method. Inside main() we create an instance of our service and pass it to our newly created UserController:
public class Main {
  public static void main(String[] args) {
    new UserController(new UserService());
  }
}
If we now run main(), Spark starts an embedded Jetty server that listens on port 4567. We can test our first route by sending a GET request to http://localhost:4567/users.
In case the service returns a list with two user objects the response body might look like this:
[com.mscharhag.sparkdemo.User@449c23fd, com.mscharhag.sparkdemo.User@437b26fe]
Obviously this is not the response we want.

Spark uses an interface called ResponseTransformer to convert objects returned by routes to an actual HTTP response. ResponseTransformer looks like this:
public interface ResponseTransformer {
  String render(Object model) throws Exception;
}
ResponseTransformer has a single method that takes an object and returns a String representation of this object. If no ResponseTransformer is registered for a route, Spark simply calls toString() on the returned object (which creates output like shown above).

Since we want to return JSON we have to create a ResponseTransformer that converts the passed objects to JSON. We use a small JsonUtil class with two static methods for this:
public class JsonUtil {

  public static String toJson(Object object) {
    return new Gson().toJson(object);
  }

  public static ResponseTransformer json() {
    return JsonUtil::toJson;
  }
}
toJson() is a universal method that converts an object to JSON using GSON. The second method uses a Java 8 method reference to return a ResponseTransformer instance. ResponseTransformer is again a functional interface, so it can be satisfied by providing an appropriate method implementation (toJson()). So whenever we call json() we get a new ResponseTransformer that makes use of our toJson() method.

In our UserController we can pass a ResponseTransformer as a third argument to Spark's get() method:
import static com.mscharhag.sparkdemo.JsonUtil.*;

public class UserController {
  
  public UserController(final UserService userService) {
    
    get("/users", (req, res) -> userService.getAllUsers(), json());
    
    ...
  }
}
Note again the static import of JsonUtil.* in the first line. This gives us the option to create a new ResponseTransformer by simply calling json().
Our response now looks like this:
[{
  "id": "1866d959-4a52-4409-afc8-4f09896f38b2",
  "name": "john",
  "email": "john@foobar.com"
},{
  "id": "90d965ad-5bdf-455d-9808-c38b72a5181a",
  "name": "anna",
  "email": "anna@foobar.com"
}]
We still have a small problem. The response is returned with the wrong Content-Type. To fix this, we can register a Filter that sets the JSON Content-Type:
after((req, res) -> {
  res.type("application/json");
});
Filter is again a functional interface, so it can be implemented by a short lambda expression. After a request has been handled by our Route, the filter changes the Content-Type of every response to application/json. We could also use before() instead of after() to register a filter; then the filter would be called before the request is processed by the Route.
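A before() filter works the same way. For example, it could be used to reject requests before any route runs (a hypothetical sketch; the api-key header is made up for illustration, and halt() stops request processing):
before((req, res) -> {
  // reject unauthenticated requests before any route is executed
  if (req.headers("api-key") == null) {
    halt(401, "{\"message\": \"missing api key\"}");
  }
});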

The GET /users request should be working now :-)

Returning a specific user
To return a specific user we simply create a new route in our UserController:
get("/users/:id", (req, res) -> {
  String id = req.params(":id");
  User user = userService.getUser(id);
  if (user != null) {
    return user;
  }
  res.status(400);
  return new ResponseError("No user with id '%s' found", id);
}, json());
With req.params(":id") we can obtain the :id path parameter from the URL. We pass this parameter to our service to get the corresponding user object. We assume the service returns null if no user with the passed id is found. In this case, we change the HTTP status code to 400 (Bad Request) and return an error object.

ResponseError is a small helper class we use to convert error messages and exceptions to JSON. It looks like this:
public class ResponseError {
  private String message;

  public ResponseError(String message, String... args) {
    this.message = String.format(message, args);
  }

  public ResponseError(Exception e) {
    this.message = e.getMessage();
  }

  public String getMessage() {
    return this.message;
  }
}
We are now able to query for a single user with a request like this:
GET /users/5f45a4ff-35a7-47e8-b731-4339c84962be
If a user with this id exists, we will get a response that looks something like this:
{
  "id": "5f45a4ff-35a7-47e8-b731-4339c84962be",
  "name": "john",
  "email": "john@foobar.com"
}
If we use an invalid user id, a ResponseError object will be created and converted to JSON. In this case the response looks like this:
{
  "message": "No user with id 'foo' found"
}

Creating and updating users
Creating and updating users is again very easy. Like returning the list of all users, it is done with a single service call per route:
post("/users", (req, res) -> userService.createUser(
    req.queryParams("name"),
    req.queryParams("email")
), json());

put("/users/:id", (req, res) -> userService.updateUser(
    req.params(":id"),
    req.queryParams("name"),
    req.queryParams("email")
), json());
To register a route for HTTP POST or PUT requests we simply use the static post() and put() methods of Spark. Inside a Route we can access HTTP POST parameters using req.queryParams().
For simplicity reasons (and to show another Spark feature) we do not do any validation inside the routes. Instead, we assume that the service throws an IllegalArgumentException if we pass in invalid values (sketched below).
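Such a check inside UserService might look like the following sketch (the User constructor and the internal users map are assumptions, since the service internals are not shown here):
public User createUser(String name, String email) {
  if (name == null || name.isEmpty()) {
    throw new IllegalArgumentException("Parameter 'name' cannot be empty");
  }
  if (email == null || email.isEmpty()) {
    throw new IllegalArgumentException("Parameter 'email' cannot be empty");
  }
  // assumed internal storage: a map of id -> user
  User user = new User(UUID.randomUUID().toString(), name, email);
  users.put(user.getId(), user);
  return user;
}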

Spark gives us the option to register ExceptionHandlers. An ExceptionHandler will be called if an Exception is thrown while processing a route. ExceptionHandler is another single method interface we can implement using a Java 8 Lambda expression:
exception(IllegalArgumentException.class, (e, req, res) -> {
  res.status(400);
  res.body(toJson(new ResponseError(e)));
});
Here we create an ExceptionHandler that is called if an IllegalArgumentException is thrown. The caught Exception object is passed as the first parameter. We set the response code to 400 and add an error message to the response body.

If the service throws an IllegalArgumentException when the email parameter is empty, we might get a response like this:
{
  "message": "Parameter 'email' cannot be empty"
}

The complete source of the controller can be found here.

Testing
Because of Spark's simple nature it is very easy to write integration tests for our sample application.
Let's start with this basic JUnit test setup:
public class UserControllerIntegrationTest {

  @BeforeClass
  public static void beforeClass() {
    Main.main(null);
  }

  @AfterClass
  public static void afterClass() {
    Spark.stop();
  }
  
  ...
}
In beforeClass() we start our application by simply running the main() method. After all tests have finished we call Spark.stop(), which stops the embedded server that runs our application.

After that we can send HTTP requests within test methods and validate that our application returns the correct response. A simple test that sends a request to create a new user can look like this:
@Test
public void aNewUserShouldBeCreated() {
  TestResponse res = request("POST", "/users?name=john&email=john@foobar.com");
  Map<String, String> json = res.json();
  assertEquals(200, res.status);
  assertEquals("john", json.get("name"));
  assertEquals("john@foobar.com", json.get("email"));
  assertNotNull(json.get("id"));
}
request() and TestResponse are two small self-made test utilities. request() sends an HTTP request to the passed URL and returns a TestResponse instance. TestResponse is just a small wrapper around some HTTP response data. The source of request() and TestResponse is included in the complete test class found on GitHub.
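A minimal sketch of how such a helper could look using plain HttpURLConnection (the actual version on GitHub may differ):
private TestResponse request(String method, String path) {
  try {
    URL url = new URL("http://localhost:4567" + path);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod(method);
    connection.connect();
    // read the whole response body; for non-2xx responses
    // connection.getErrorStream() would have to be read instead
    String body = new Scanner(connection.getInputStream(), "UTF-8").useDelimiter("\\A").next();
    return new TestResponse(connection.getResponseCode(), body);
  } catch (IOException e) {
    throw new RuntimeException("Sending request failed: " + e.getMessage(), e);
  }
}

private static class TestResponse {
  public final int status;
  public final String body;

  public TestResponse(int status, String body) {
    this.status = status;
    this.body = body;
  }

  public Map<String, String> json() {
    return new Gson().fromJson(body, HashMap.class);
  }
}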

Conclusion
Compared to other web frameworks, Spark provides only a small set of features. However, it is so simple that you can build small web applications within a few minutes (even if you have not used Spark before). If you want to look into Spark you should definitely use Java 8, which greatly reduces the amount of code you have to write.

You can find the complete source of the sample project on GitHub.


Saturday, May 24, 2014

Java File I/O Basics

Java 7 introduced the java.nio.file package to provide comprehensive support for file I/O. Besides a lot of other functionality, this package includes the Files class (if you already use this class you can stop reading here). Files contains a lot of static methods that can be used to accomplish common tasks when working with files. Unfortunately, it seems to me that a lot of newer (Java 7+) code is still written using old (pre-Java 7) ways of working with files. This does not have to be bad, but it can make things more complex than necessary. A possible reason might be that many articles and highly rated Stack Overflow answers were written before the release of Java 7.

In the rest of this post I will provide some code samples that show how you can accomplish common file related tasks with Java 7 or newer.

Working with files
// Create directories
// This will create the "bar" directory in "/foo"
// If "/foo" does not exist, it will be created first
Files.createDirectories(Paths.get("/foo/bar"));

// Copy a file
// This copies the file "/foo/bar.txt" to "/foo/baz.txt"
Files.copy(Paths.get("/foo/bar.txt"), Paths.get("/foo/baz.txt"));

// Move a file
// This moves the file "/foo/bar.txt" to "/foo/baz.txt"
Files.move(Paths.get("/foo/bar.txt"), Paths.get("/foo/baz.txt"));

// Delete a file
Files.delete(Paths.get("/foo/bar.txt"));

// Delete a file but do not fail if the file does not exist
Files.deleteIfExists(Paths.get("/foo/bar.txt"));

// Check if a file exists
boolean exists = Files.exists(Paths.get("/foo/bar.txt"));
Most methods of Files take one or more arguments of type Path. Path instances represent a path to a file or directory and can be obtained using Paths.get(). Note that most methods shown here also have an additional varargs parameter that can be used to pass additional options.

For example:
Files.copy(Paths.get("/foo.txt"), Paths.get("/bar.txt"), StandardCopyOption.REPLACE_EXISTING);

Iterating through all files within a directory
Files.walkFileTree(Paths.get("/foo"), new SimpleFileVisitor<Path>() {
  @Override
  public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
    System.out.println("file: " + file);
    return FileVisitResult.CONTINUE;
  }
});
Here the visitFile() method is called for every file within the /foo directory. You can override additional methods of SimpleFileVisitor if you want to track directories too, as shown below.
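For example, additionally overriding preVisitDirectory() prints every directory before its contents are visited:
Files.walkFileTree(Paths.get("/foo"), new SimpleFileVisitor<Path>() {
  @Override
  public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
    System.out.println("entering directory: " + dir);
    return FileVisitResult.CONTINUE;
  }

  @Override
  public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
    System.out.println("file: " + file);
    return FileVisitResult.CONTINUE;
  }
});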

Writing and reading files
// Write lines to file
List<String> lines = Arrays.asList("first", "second", "third");
Files.write(Paths.get("/foo/bar.txt"), lines, Charset.forName("UTF-8"));

// Read lines
List<String> lines = Files.readAllLines(Paths.get("/foo/bar.txt"), Charset.forName("UTF-8"));
The methods shown here work with characters. Similar methods are available if you need to work with bytes.
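For example, the byte-based counterparts are Files.write(Path, byte[]) and Files.readAllBytes():
// Write bytes to a file
byte[] data = {0x66, 0x6f, 0x6f};
Files.write(Paths.get("/foo/bar.bin"), data);

// Read all bytes of a file
byte[] bytes = Files.readAllBytes(Paths.get("/foo/bar.bin"));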

Conclusion
If you didn't know about java.nio.file.Files you should definitely have a look at the Javadoc method summary. There is a lot of useful stuff inside.


Saturday, May 10, 2014

Grails: The Tomcat kill switch

Some time ago we observed some strange effects when running an application in our local development environment. It turned out that the cause of these effects was a feature of the Grails Tomcat plugin.

The actual application consists of two different Grails applications. To run both at the same time on our local machines we configured two different application ports: the first application (let's call it App1) was running on port 8081 while App2 (the second application) was running on port 8082.

Now we faced the following effects:
  • If App2 was started before App1, everything worked fine. However, if App1 was started first, App2 would not start and showed a "Port is already in use" error instead. What?!
  • App1 contains links that point to App2. In our local environment these links look like http://localhost:8082/app2/someController. If App2 was not started at all and someone clicked one of these links, App1 stopped working. What?!
After some research it turned out that the reason for this is the TomcatKillSwitch provided by the Tomcat plugin. This class starts a server socket that listens on serverPort + 1. If something is sent to this port, the embedded Tomcat is shut down.

So whenever we started App1 first, the kill switch was listening on port 8082 (which is why we were not able to start App2), and clicking any link that pointed to App2 triggered the kill switch and shut down App1's embedded Tomcat. If we started App2 first, the kill switch's server socket silently failed to start together with App1 and everything worked as expected. Changing the port of App2 from 8082 to 8083 solved the problem.
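The effect is easy to reproduce. Since the kill switch reacts to any data sent to serverPort + 1, a plain socket connection is enough to shut the server down (a sketch; port 8082 assumes App1 is running on port 8081):
// sending anything to serverPort + 1 shuts down App1's embedded Tomcat
try (Socket socket = new Socket("localhost", 8082)) {
  socket.getOutputStream().write("anything".getBytes());
}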


Thursday, May 8, 2014

Grails Controller namespaces

Grails 2.3 introduced controller namespaces. This feature gives Grails developers the option to have multiple controllers that use the same name (within different packages).

It is not that hard to get into a situation where you want two or more controllers with the same name.
Assume we have an application that gives users the option to update their personal profile settings. We might have a ProfileController for this. Now we might also need an administration backend that gives administrators the option to update user profiles. ProfileController would again be a good name for handling these kinds of requests.

With Grails 2.3 we can now do this by adding namespaces to controllers using the static namespace property:
package foo.bar.user

class ProfileController {

  static namespace = 'user'

  // actions that can be accessed by users
}
package foo.bar.admin

class ProfileController {

  static namespace = 'admin'

  // actions that can be accessed by administrators
}
We can now use the namespace to map the controllers to different URLs within UrlMappings.groovy:
class UrlMappings {

  static mappings = { 
    '/profile' { 
      controller = 'profile' 
      namespace = 'user' 
    }
    '/admin/profile' { 
      controller = 'profile' 
      namespace = 'admin' 
    }
    ..
  }
}
To make the namespace part of the URL by default we can use the $namespace variable:
static mappings = { 
  "/$namespace/$controller/$action?"() 
}
This way we are able to access our controllers with the following URLs:

/user/profile/<action>
/admin/profile/<action>

Please note that we also need to provide the namespace when building links:
<g:link controller="profile" namespace="admin">Profile admin functions</g:link>

