mscharhag, Programming and Stuff;

A blog about programming and software development topics, mostly focused on Java technologies including Java EE, Spring and Grails.

  • Sunday, 23 February, 2020

    Composing custom annotations with Spring

    Java annotations were introduced with Java 5 back in 2004 as a way to add metadata to Java source code. Today many major frameworks like Spring or Hibernate rely heavily on annotations.

    In this post we will have a look at a very useful Spring feature which allows us to create our own annotations based on one or more Spring annotations.

    Composing a custom annotation

    Assume we have a set of Spring annotations we often use together. A common example is the combination of @Service and @Transactional:

    @Service
    @Transactional(rollbackFor = Exception.class, timeout = 5)
    public class UserService {
        // ...
    }

    Instead of repeating both annotations over and over again, we can create our own annotation containing these two Spring annotations. Creating our own annotation is very easy and looks like this:

    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @Service
    @Transactional(rollbackFor = Exception.class, timeout = 5)
    public @interface MyService {}

    An annotation is defined with the @interface keyword (instead of class or interface). The standard Java Annotation @Retention is used to indicate that the annotation should be processable at runtime. We also added both Spring annotations to our annotation.

    Now we can use our own @MyService annotation to annotate our services:

    @MyService
    public class UserService {
        // ...
    }

    Spring now detects that @MyService is annotated with @Service and @Transactional and provides the same behaviour as the previous example with both annotations present at the UserService class.

    Note that this is a feature of Spring's way of annotation processing and not a general Java feature. Annotations of other frameworks and libraries might not work if you add them to your own annotation.
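    To see why framework support is needed here, consider how plain Java reflection treats meta-annotations. The following sketch (all class and annotation names are made up for illustration) shows that an annotation placed on another annotation is not visible on the annotated class itself; frameworks like Spring have to walk the annotation graph themselves:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class MetaAnnotationDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Marker {}

    // custom annotation that is itself annotated with @Marker
    @Retention(RetentionPolicy.RUNTIME)
    @Marker
    @interface MyCustom {}

    @MyCustom
    static class SomeService {}

    public static void main(String[] args) {
        // plain reflection only sees the directly present annotation
        boolean direct = SomeService.class.isAnnotationPresent(Marker.class);
        // to find @Marker we have to inspect the annotations of @MyCustom
        boolean meta = SomeService.class.isAnnotationPresent(MyCustom.class)
                && MyCustom.class.isAnnotationPresent(Marker.class);
        System.out.println(direct + " " + meta); // prints "false true"
    }
}
```

    This is essentially the lookup Spring performs internally when it detects composed annotations.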

    Example use cases

    Custom annotations can be used in various situations to improve the readability of our code. Here are two other examples that might come in handy.

    Maybe we need a property value in various locations of our code. Properties are often injected using Spring's @Value annotation:

    // injects the configuration property my.api.key
    @Value("${my.api.key}")
    private String apiKey;

    In such a situation we can move the property expression out of our code into a separate annotation:

    @Target({ElementType.FIELD, ElementType.PARAMETER})
    @Retention(RetentionPolicy.RUNTIME)
    @Value("${my.api.key}")
    public @interface ApiKey {}

    Within our code we can now use @ApiKey instead of repeating the property expression everywhere:

    @ApiKey
    private String apiKey;

    Another example is integration tests. Tests often use various Spring annotations to define the test setup. These annotations can be grouped together using a custom annotation. For example, we can create a @MockMvcTest annotation that defines the Spring setup for MockMvc tests:

    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @SpringBootTest
    @AutoConfigureMockMvc(secure = false)
    @TestPropertySource(locations = "")
    @ExtendWith(SpringExtension.class)
    public @interface MockMvcTest {}

    The definitions of our tests look a lot cleaner now. We just have to add @MockMvcTest to get the complete test setup:

    @MockMvcTest
    public class MyTest {
        // ...
    }

    Note that our @MockMvcTest annotation also contains the @ExtendWith annotation of JUnit 5. Like Spring, JUnit 5 is also able to detect this annotation if it is added to your own custom annotation. Be aware that this will not work if you are still using JUnit 4. With JUnit 4 you have to use @RunWith instead of @ExtendWith. Unfortunately, @RunWith only works when placed directly on the test class.

    Examples in Spring

    Spring uses this feature in various situations to define shortcuts for common annotations.

    Here are a few examples:

    • @GetMapping is the short version for @RequestMapping(method = {RequestMethod.GET}).
    • @RestController is a composition of @Controller and @ResponseBody.
    • @SpringBootApplication is a shortcut for @SpringBootConfiguration, @EnableAutoConfiguration and @ComponentScan.

    You can verify this yourself by looking into the definition of these annotations in Spring's source code.

  • Wednesday, 12 February, 2020

    REST / HTTP methods: POST vs. PUT vs. PATCH

    Each HTTP request consists of a method (sometimes called verb) that indicates the action to be performed on the identified resource.

    When building RESTful web services the HTTP method POST is typically used for resource creation, while PUT is used for resource updates. While this is fine in most cases, it can also be viable to use PUT for resource creation. PATCH is an alternative for resource updates as it allows partial updates.

    In general we can say:

    • POST requests create child resources at a server-defined URI. POST is also used as a general processing operation.
    • PUT requests create or replace the resource at a client-defined URI.
    • PATCH requests update parts of the resource at a client-defined URI.

    But let's look a bit more into details and see how these verbs are defined in the HTTP specification. The relevant part here is section 9 of the HTTP RFC (2616).

    POST

    The RFC describes the function of POST as:

    The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line.

    This allows the client to create resources without knowing the URI for the new resource. For example, we can send a POST request to /projects to create a new project. The server can now create the project as a new subordinate of /projects, for example: /projects/123. So when using POST for resource creation the server can decide the URI (and typically the ID) of the newly created resources.

    When the server created a resource, it should respond with the 201 (Created) status code and a Location header that points to the newly created resource.

    For example:


    POST /projects HTTP/1.1
    Content-Type: application/json

    {
        "name": "my cool project"
    }

    HTTP/1.1 201 Created
    Location: /projects/123

    POST is not idempotent. So sending the same POST request multiple times can result in the creation of multiple resources. Depending on your needs this might be a useful feature. If not, you should have some validation in place and make sure a resource is only created once based on some custom criteria (e.g. the project name has to be unique).

    The RFC also tells us:

    The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.

    This means that POST does not necessarily need to create resources. It can also be used to perform a generic action (e.g. starting a batch job, importing data or processing something).
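    For example, a POST request that triggers an action rather than creating a resource might look like this (the /projects/123/export path is invented for illustration):

```http
POST /projects/123/export HTTP/1.1

HTTP/1.1 204 No Content
```

    If the response included an entity describing the result of the action, 200 (OK) would be used instead of 204.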

    PUT

    The main difference between POST and PUT is a different meaning of the request URI. The HTTP RFC says:

    The URI in a POST request identifies the resource that will handle the enclosed entity. [..] In contrast, the URI in a PUT request identifies the entity enclosed with the request [..] and the server MUST NOT attempt to apply the request to some other resource.

    For PUT requests the client needs to know the exact URI of the resource. We cannot send a PUT request to /projects and expect a new resource to be created at /projects/123. Instead, we have to send the PUT request directly to /projects/123. So if we want to create resources with PUT, the client needs to know (how to generate) the URI / ID of the new resource.

    In situations where the client is able to generate the resource URI / ID for new resources, PUT should actually be preferred over POST. In these cases the resource creation is typically idempotent, which is a clear hint towards PUT.

    It is fine to use PUT for creation and updating resources. So sending a PUT request to /projects/123 might create the project if it does not exist or replace the existing project. HTTP status codes should be used to inform the client if the resource has been created or updated.

    The HTTP RFC tells us:

    If a new resource is created, the origin server MUST inform the user agent via the 201 (Created) response. If an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request.

    Generally speaking, if the exact resource URI is known and the operation is idempotent, PUT is typically a better choice than POST. In most situations this makes PUT a good choice for update requests.

    However, there is one quirk that should be remembered for resource updates. According to the RFC, PUT should replace the existing resource with the new one. This means we cannot do partial updates. So, if we want to update a single field of the resource, we have to send a PUT request containing the complete resource.
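    For example, to change only the name of the project at /projects/123 with PUT, the request still has to carry the complete representation (the fields shown are illustrative):

```http
PUT /projects/123 HTTP/1.1
Content-Type: application/json

{
    "name": "new project name",
    "description": "unchanged description",
    "owner": "john"
}
```

    Omitting "description" or "owner" here would mean replacing the resource with a version that no longer has these fields.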

    PATCH

    The HTTP PATCH method is defined in RFC 5789 as an extension to the earlier mentioned HTTP RFC. While PUT is used to replace an existing resource, PATCH is used to apply partial modifications to a resource.

    Quoting the RFC:

    With PATCH, [..], the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.  The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources;

    So PATCH, similar to POST, might also affect resources other than the one identified by the Request URI.

    Often PATCH requests use the same format as the resource that should be updated and just omit the fields that should not change. However, it does not have to be this way. It is also fine to use a separate patch format, which describes how the resource should be modified.
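    For example, assuming the project resource at /projects/123, a PATCH request that changes only the name can simply omit all other fields:

```http
PATCH /projects/123 HTTP/1.1
Content-Type: application/json

{
    "name": "new project name"
}
```

    Alternatively, a dedicated patch format such as JSON Patch (RFC 6902, media type application/json-patch+json) can describe the modification as an explicit list of operations.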

    PATCH is neither safe nor idempotent.

    Maybe you are wondering in which situations a partial resource update is not idempotent. A simple example here is the addition of an item to an existing list resource, like adding a product to a shopping cart. Multiple (partial) update requests might add the product multiple times to the shopping cart.
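    The shopping cart example can be sketched in a few lines of plain Java (the Cart class is made up for illustration): applying the same "add product" update twice leaves the cart in a different state than applying it once, which is exactly what non-idempotent means.

```java
import java.util.ArrayList;
import java.util.List;

public class Cart {
    private final List<String> items = new ArrayList<>();

    // a partial update that appends to a list is not idempotent:
    // each call changes the server state again
    public void addItem(String productId) {
        items.add(productId);
    }

    public int itemCount() {
        return items.size();
    }

    public static void main(String[] args) {
        Cart cart = new Cart();
        cart.addItem("product-42");
        cart.addItem("product-42"); // same request sent again
        System.out.println(cart.itemCount()); // prints "2", not "1"
    }
}
```

    A partial update that sets a field (e.g. the project name) would be idempotent in contrast: setting the same value twice has the same outcome as setting it once.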

  • Sunday, 9 February, 2020

    HTTP methods: Idempotency and Safety

    Idempotency and safety are properties of HTTP methods. The HTTP RFC defines these properties and tells us which HTTP methods are safe and idempotent. Server applications should make sure to implement safe and idempotent semantics correctly, as clients might expect them.

    Safe HTTP methods

    HTTP methods are considered safe if they do not alter the server state. So safe methods can only be used for read-only operations. The HTTP RFC defines the following methods to be safe: GET, HEAD, OPTIONS and TRACE.

    In practice it is often not possible to implement safe methods in a way that does not alter any server state at all.

    For example, a GET request might create log or audit messages, update statistic values or trigger a cache refresh on the server.

    The RFC tells us here:

    Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.

    Idempotent HTTP methods

    Idempotency means that multiple identical requests will have the same outcome. So it does not matter if a request is sent once or multiple times. The following HTTP methods are idempotent: GET, HEAD, OPTIONS, TRACE, PUT and DELETE. All safe HTTP methods are idempotent; PUT and DELETE are idempotent but not safe.

    Note that idempotency does not mean that the server has to respond in the same way on each request.

    For example, assume we want to delete a project by an ID using a DELETE request:

    DELETE /projects/123 HTTP/1.1

    As response we might get an HTTP 200 status code indicating that the project has been deleted successfully. If we send this DELETE request again, we might get an HTTP 404 as response because the project has already been deleted. The second request did not alter the server state so the DELETE operation is idempotent even if we get a different response.

    Idempotency is a positive feature of an API because it can make an API more fault-tolerant. Assume there is an issue on the client and requests are sent multiple times. As long as idempotent operations are used, this causes no problems on the server side.
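    A sketch of how a client can exploit this (the Retry class and its method name are invented for illustration): blindly retrying a request after a failure is only safe because the operation is idempotent, so a duplicate delivery cannot corrupt server state.

```java
import java.util.concurrent.Callable;

public class Retry {

    // Retries the given call up to maxAttempts times.
    // Only safe for idempotent operations (e.g. GET, PUT, DELETE),
    // because the request may reach the server more than once.
    public static <T> T withRetries(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                // e.g. a timeout; the request may or may not have been processed
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        String result = withRetries(() -> {
            attempts[0]++;
            if (attempts[0] < 3) throw new RuntimeException("transient network error");
            return "HTTP 200";
        }, 5);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

    With a non-idempotent operation like POST, the same retry loop could create the same resource several times.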

    HTTP method overview

    The following table summarizes which HTTP methods are safe and idempotent:

    HTTP Method | Safe | Idempotent
    ----------- | ---- | ----------
    GET         | Yes  | Yes
    HEAD        | Yes  | Yes
    OPTIONS     | Yes  | Yes
    TRACE       | Yes  | Yes
    PUT         | No   | Yes
    DELETE      | No   | Yes
    POST        | No   | No
    PATCH       | No   | No



  • Saturday, 1 February, 2020

    Validating code and architecture constraints with ArchUnit


    ArchUnit is a library for checking Java code against a set of self-defined code and architecture constraints. These constraints can be defined in a fluent Java API within unit tests. ArchUnit can be used to validate dependencies between classes or layers, to check for cyclic dependencies and much more. In this post we will create some example rules to see how we can benefit from ArchUnit.

    Required dependency

    To use ArchUnit we need to add the following dependency to our project:

    <dependency>
        <groupId>com.tngtech.archunit</groupId>
        <artifactId>archunit-junit5</artifactId>
        <scope>test</scope>
    </dependency>

    If you are still using JUnit 4 you should use the archunit-junit4 artifact instead.

    Creating the first ArchUnit rule

    Now we can start creating our first ArchUnit rule. For this we create a new class in our test folder:

    @RunWith(ArchUnitRunner.class) //only for JUnit 4, not needed with JUnit 5
    @AnalyzeClasses(packages = "com.mscharhag.archunit")
    public class ArchUnitTest {
        // verify that classes whose name ends with "Service" are located in a "service" package
        @ArchTest
        private final ArchRule services_are_located_in_service_package = classes()
                .that().haveSimpleNameEndingWith("Service")
                .should().resideInAPackage("..service..");
    }

    With @AnalyzeClasses we tell ArchUnit which Java packages should be analyzed. If you are using JUnit 4 you also need to add the ArchUnit JUnit runner.

    Inside the class we create a field and annotate it with @ArchTest. This is our first test.

    We can define the constraint we want to validate by using ArchUnit's fluent Java API. In this example we want to validate that all classes whose name ends with Service (e.g. UserService) are located in a package named service (e.g. com.mscharhag.archunit.service).

    Most ArchUnit rules start with a selector that indicates what type of code units should be validated (classes, methods, fields, etc.). Here, we use the static method classes() to select classes. We restrict the selection to a subset of classes using the that() method (here we only select classes whose name ends with Service). With the should() method we define the constraint that should be matched against the selected classes (here: the classes should reside in a service package).

    When running this test class, all tests annotated with @ArchTest will be executed. The test will fail if ArchUnit detects service classes outside a service package.

    More examples

    Let's look at some more examples.

    We can use ArchUnit to make sure that all Logger fields are private, static and final:

    // verify that logger fields are private, static and final
    @ArchTest
    private final ArchRule loggers_should_be_private_static_final = fields()
            .that().haveRawType(Logger.class)
            .should().bePrivate()
            .andShould().beStatic()
            .andShould().beFinal();

    Here we select fields of type Logger and define multiple constraints in one rule.

    Or we can make sure that methods in utility classes have to be static:

    // methods in classes whose name ends with "Util" should be static
    @ArchTest
    static final ArchRule utility_methods_should_be_static = methods()
            .that().areDeclaredInClassesThat().haveSimpleNameEndingWith("Util")
            .should().beStatic();

    To enforce that packages named impl contain no interfaces we can use the following rule:

    // verify that interfaces are not located in implementation packages
    @ArchTest
    static final ArchRule interfaces_should_not_be_placed_in_impl_packages = noClasses()
            .that().areInterfaces()
            .should().resideInAPackage("..impl..");

    Note that we use noClasses() instead of classes() to negate the should constraint.

    (Personally I think this rule would be much easier to read if we could define the rule as interfaces().should().notResideInAPackage("..impl.."). Unfortunately ArchUnit provides no interfaces() method)

    Or maybe we are using the Java Persistence API and want to make sure that EntityManager is only used in repository classes:

    @ArchTest
    static final ArchRule only_repositories_should_use_entityManager = noClasses()
            .that().resideOutsideOfPackage("..repository..")
            .should().dependOnClassesThat().areAssignableTo(EntityManager.class);

    Layered architecture example

    ArchUnit also comes with some utilities to validate specific architecture styles.

    For example, we can use layeredArchitecture() to validate access rules for layers in a layered architecture:

    @ArchTest
    static final ArchRule layer_dependencies_are_respected = layeredArchitecture()
            .layer("Controllers").definedBy("..controller..")
            .layer("Services").definedBy("..service..")
            .layer("Repositories").definedBy("..repository..")
            .whereLayer("Repositories").mayOnlyBeAccessedByLayers("Services")
            .whereLayer("Services").mayOnlyBeAccessedByLayers("Controllers");

    Here we define three layers: Controllers, Services and Repositories. The repository layer may only be accessed by the service layer, while the service layer may only be accessed by controllers.

    Shortcuts for common rules

    To avoid having to define all rules ourselves, ArchUnit comes with a set of common rules defined as static constants. If these rules fit our needs, we can simply assign them to @ArchTest fields in our test.

    For example, we can use the predefined NO_CLASSES_SHOULD_THROW_GENERIC_EXCEPTIONS rule to make sure no exceptions of type Exception and RuntimeException are thrown:

    @ArchTest
    private final ArchRule no_generic_exceptions = NO_CLASSES_SHOULD_THROW_GENERIC_EXCEPTIONS;


    ArchUnit is a powerful tool to validate a code base against a set of self-defined rules. Some of the examples we have seen are also reported by common static code analysis tools like FindBugs or SonarQube. However, these tools are typically harder to extend with your own project-specific rules, and this is where ArchUnit comes in.

    As always you can find the sources of the examples on GitHub. If you are interested in ArchUnit you should also check the comprehensive user guide.

  • Thursday, 23 January, 2020

    Creating an API Gateway with Zuul and Spring Boot


    When working with microservices it is common to have a unified access point to your system (also called an API Gateway). Consumers only talk to the API Gateway and not to the services directly. This hides the fact that your system is composed of multiple smaller services. The API Gateway also helps to solve common challenges like authentication, managing cross-origin resource sharing (CORS) or request throttling.

    Zuul is a JVM-based API Gateway developed and open-sourced by Netflix. In this post we will create a small Spring application that includes a zuul proxy for routing requests to other services.

    Enabling zuul proxy

    To use zuul in a project we have to add the spring-cloud-starter-netflix-zuul dependency. If we want to use the spring zuul actuator endpoint (more on this later), we also need to add the spring-boot-starter-actuator dependency.

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
    </dependency>

    <!-- optional -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>

    Next we have to enable the zuul proxy using @EnableZuulProxy in our spring boot application class (or any other spring @Configuration class):

    @SpringBootApplication
    @EnableZuulProxy
    public class ZuulDemoApplication {
        // ...
    }

    Now we can start configuring our routes.

    Configuring routes

    Routes describe how incoming requests should be routed by zuul. To configure zuul routes we only have to add a few lines to our spring boot application.yml (or application.properties) file:


    zuul:
      routes:
        users:
          path: /users/**
          url: https://users.example.com      # example target url
        projects:
          path: /projects/**
          url: https://projects.example.com   # example target url

    Here we define the routes for two endpoints: /users and /projects. Requests matching /users/** will be routed to the url configured for the users route, while requests matching /projects/** are routed to the url of the projects route.

    Assume we start this example application locally and send a GET request to http://localhost:8080/users/john. This request matches the zuul route /users/**, so zuul will forward the request to the url configured for this route, with the remaining path (/john) appended.

    When using a service registry (like Eureka) we can alternatively configure a service id instead of an url:

    zuul:
      routes:
        users:
          path: /users/**
          serviceId: user_service

    Another useful option is sensitiveHeaders, which allows us to remove headers before the request is routed to another service. This can be used to avoid leaking of sensitive headers into external servers (e.g. security tokens or session ids).

    zuul:
      routes:
        users:
          path: /users/**
          sensitiveHeaders: Cookie,Set-Cookie,Authorization

    Note that the shown example headers (Cookie,Set-Cookie,Authorization) are the default value of the sensitiveHeaders property. So these headers will not be passed, even if sensitiveHeaders is not specified.
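    Conversely, if a route talks to a trusted internal service and really should forward all headers, the list can be set to empty (a sketch; be sure this is intended before disabling the default protection):

```yaml
zuul:
  routes:
    users:
      path: /users/**
      sensitiveHeaders:    # empty list: pass all headers to the service
```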

    Request / Response modification with filters

    We can customize zuul routing using filters. To create a zuul filter we create a new spring bean (marked with @Component) which extends ZuulFilter:

    @Component
    public class MyFilter extends ZuulFilter {

        @Override
        public String filterType() {
            return FilterConstants.PRE_TYPE;
        }

        @Override
        public int filterOrder() {
            return FilterConstants.PRE_DECORATION_FILTER_ORDER - 1;
        }

        @Override
        public boolean shouldFilter() {
            return true;
        }

        @Override
        public Object run() {
            RequestContext context = RequestContext.getCurrentContext();
            context.addZuulRequestHeader("my-auth-token", "s3cret");
            return null;
        }
    }
    ZuulFilter requires the definition of four methods:

    • Within filterType() we define that our filter should run before (PRE_TYPE) the actual routing. If we want to modify the response of the service before it is sent back to the client, we can return POST_TYPE here.
    • With filterOrder() we can influence the order of filter execution.
    • shouldFilter() indicates if this filter should be executed (= calling the run() method).
    • In run() we define the actual filter logic. Here we add a simple header named my-auth-token to the request that is routed to another service.

    Filters allow us to modify the request before it is sent to the specified service, or to modify the response of the service before it is sent back to the client.

    Actuator endpoint

    Spring cloud zuul exposes an additional Spring Boot actuator endpoint. To use this feature we need to have spring-boot-starter-actuator in the classpath.

    By default the actuator endpoint is disabled. Within application.yml we enable specific actuator endpoints using the management.endpoints.web.exposure.include property:

    management:
      endpoints:
        web:
          exposure:
            include: '*'

    Here we simply enable all actuator endpoints. More detailed configuration options can be found in the Spring Boot actuator documentation.

    After enabling the zuul actuator endpoint we can send a GET request to http://localhost:8080/actuator/routes to get a list of all configured routes.

    An example response might look like this:

    {
        "/users/**": "user_service",
        "/projects/**": "https://projects.example.com"
    }

    With Spring cloud you can easily integrate a zuul proxy in your application. This allows you to configure routes in .yml or .properties files. Routing behaviour can be customized with filters.

    More details on spring's support for zuul can be found in the official spring cloud zuul documentation. As always you can find the examples shown in this post on GitHub.