mscharhag, Programming and Stuff;

A blog about programming and software development topics, mostly focused on Java technologies including Java EE, Spring and Grails.

  • Thursday, 8 April, 2021

    Looking into the JDK 16 vector API

    JDK 16 comes with the incubator module jdk.incubator.vector (JEP 338) which provides a portable API for expressing vector computations. In this post we will have a quick look at this new API.

    Note that the API is in incubator status and likely to change in future releases.

    Why vector operations?

    When supported by the underlying hardware, vector operations can increase the number of computations performed in a single CPU cycle.

    Assume we want to add two vectors, each containing a sequence of four integer values. Vector hardware allows us to perform this operation (four integer additions in total) in a single CPU cycle. An ordinary scalar addition would perform only one integer addition in the same time.

    The new vector API allows us to define vector operations in a platform agnostic way. These operations then compile to vector hardware instructions at runtime.

    Note that HotSpot already supports auto-vectorization which can transform scalar operations into vector hardware instructions. However, this approach is quite limited and utilizes only a small set of available vector hardware instructions.

    A few example domains that might benefit from the new vector API are machine learning, linear algebra or cryptography.

    Enabling the vector incubator module (jdk.incubator.vector)

    To use the new vector API we need to use JDK 16 (or newer). We also need to add the jdk.incubator.vector module to our project. This can be done with a module-info.java file:

    module com.mscharhag.vectorapi {
        requires jdk.incubator.vector;
    }

    Implementing a simple vector operation

    Let's start with a simple example:

    float[] a = new float[] {1f, 2f, 3f, 4f};
    float[] b = new float[] {5f, 8f, 10f, 12f};
    
    FloatVector first = FloatVector.fromArray(FloatVector.SPECIES_128, a, 0);
    FloatVector second = FloatVector.fromArray(FloatVector.SPECIES_128, b, 0);
    
    FloatVector result = first
            .add(second)
            .pow(2)
            .neg();

    We start with two float arrays (a and b) each containing four elements. These provide the input data for our vectors.

    Next we create two FloatVectors using the static fromArray(..) factory method. The first parameter defines the size of the vector in bits (here 128). Using the last parameter we are able to define an offset value for the passed arrays (here we use 0).

    In Java a float value has a size of four bytes (= 32 bits). So, four float values match exactly the size of our vector (128 bits).
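    As a back-of-the-envelope check, the lane count of a species is simply the vector size divided by the element size (the helper below is illustrative, not part of the vector API):

    ```java
    public class LaneCount {
        // Lane count = vector size in bits / element size in bits.
        // SPECIES_128 with 32-bit floats therefore provides 128 / 32 = 4 lanes.
        static int lanes(int vectorBits, int elementBits) {
            return vectorBits / elementBits;
        }
    }
    ```

    The vector API exposes this value directly via the length() method of a VectorSpecies.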

    After that, we can define our vector operations. In this example we add both vectors together, then we square and negate the result.

    The resulting vector contains the values:

    [-36.0, -100.0, -169.0, -256.0]

    We can write the resulting vector into an array using the intoArray(..) method:

    float[] resultArray = new float[4];
    result.intoArray(resultArray, 0);

    In this example we use FloatVector to define operations on float values. Of course we can use other numeric types too. Vector classes are available for byte, short, int, long, float and double (ByteVector, ShortVector, IntVector, LongVector, FloatVector and DoubleVector).

    Working with loops

    While the previous example was simple to understand it does not show a typical use case of the new vector API. To gain any benefits from vector operations we usually need to process larger amounts of data.

    In the following example we start with three arrays a, b and c, each having 10000 elements. We want to add the values of a and b and store the results in c: c[i] = a[i] + b[i].

    Our code looks like this:

    final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_128;
    
    float[] a = randomFloatArray(10_000);
    float[] b = randomFloatArray(10_000);
    float[] c = new float[10_000];
    
    for (int i = 0; i < a.length; i += SPECIES.length()) {
        VectorMask<Float> mask = SPECIES.indexInRange(i, a.length);
        FloatVector first = FloatVector.fromArray(SPECIES, a, i, mask);
        FloatVector second = FloatVector.fromArray(SPECIES, b, i, mask);
        first.add(second).intoArray(c, i, mask);
    }

    Here we iterate over the input arrays in strides of the vector length. A VectorMask helps us when a vector cannot be completely filled from the input data (e.g. during the last loop iteration).
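    For comparison, a plain scalar version of the same computation processes only one element per loop iteration (a sketch, not taken from the original example code):

    ```java
    public class ScalarAdd {
        // Scalar equivalent of the vectorized loop: c[i] = a[i] + b[i]
        static float[] add(float[] a, float[] b) {
            float[] c = new float[a.length];
            for (int i = 0; i < a.length; i++) {
                c[i] = a[i] + b[i];
            }
            return c;
        }
    }
    ```

    HotSpot may auto-vectorize such a simple loop, but the vector API lets us express the vector shape explicitly instead of relying on the JIT compiler.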

    Summary

    We can use the new vector API to define vector operations that optimize computations for vector hardware. This way we can increase the number of computations performed in a single CPU cycle. The central elements of the vector API are type-specific vector classes like FloatVector or LongVector.

    You can find the example source code on GitHub.

  • Monday, 8 March, 2021

    Kotlin dependency injection with Koin

    Dependency injection is a common technique in today's software design. With dependency injection we pass dependencies to a component instead of creating them inside the component. This way we can separate the construction and use of dependencies.

    In this post we will look at Koin, a lightweight Kotlin dependency injection library. Koin describes itself as a DSL, a light container and a pragmatic API.

    Getting started with Koin

    We start with adding the Koin dependency to our project:

    <dependency>
        <groupId>org.koin</groupId>
        <artifactId>koin-core</artifactId>
        <version>2.2.2</version>
    </dependency>

    Koin artifacts are available on jcenter.bintray.com. If it is not already configured, you can add this repository with:

    <repositories>
        <repository>
            <id>central</id>
            <name>bintray</name>
            <url>https://jcenter.bintray.com</url>
        </repository>
    </repositories>
    

    Or if you are using Gradle:

    repositories {
        jcenter()    
    }
    
    dependencies {
        compile "org.koin:koin-core:2.2.2"
    }
    

    Now let's create a simple UserService class with a dependency to an AddressValidator object:

    class UserService(
        private val addressValidator: AddressValidator
    ) {
        fun createUser(username: String, address: Address) {
            // use addressValidator to validate address before creating user
        }
    }

    AddressValidator simply looks like this:

    class AddressValidator {
        fun validate(address: Address): Boolean {
            // validate address
        }
    }

    Next we will use Koin to wire both components together. We do this by creating a Koin module:

    val myModule = module {
        single { AddressValidator() }
        single(createdAtStart = true) { UserService(get()) }
    }

    This creates a module with two singletons (defined by the single function). single accepts a lambda expression as parameter that is used to create the component. Here, we simply call the constructors of our previously defined classes.

    With get() we can resolve dependencies from a Koin module. In this example we use get() to obtain the previously defined AddressValidator instance and pass it to the UserService constructor.

    The createdAtStart option tells Koin to create this instance (and its dependencies) when the Koin application is started.

    We start a Koin application with:

    val app = startKoin {
        modules(myModule)
    }

    startKoin launches the Koin container which loads and initializes dependencies. One or more Koin modules can be passed to the startKoin function. A KoinApplication object is returned.

    Retrieving objects from the Koin container

    Sometimes it is necessary to retrieve objects from the Koin dependency container. This can be done by using the KoinApplication object returned by the startKoin function:

    // retrieve UserService instance from previously defined module
    val userService = app.koin.get<UserService>()

    Another approach is to use the KoinComponent interface. KoinComponent provides an inject method we use to retrieve objects from the Koin container. For example:

    class MyApp : KoinComponent {
       
        private val userService by inject<UserService>()
    
        ...
    }

    Factories

    Sometimes object creation is not as simple as just calling a constructor. In this case, a factory method can come in handy. Koin's use of lambda expressions for object creation supports us here: we can simply call factory functions from the lambda expression.

    For example, assume the creation of a UserService instance is more complex. We can come up with something like this:

    val myModule = module {
    
        fun provideUserService(addressValidator: AddressValidator): UserService {
            val userService = UserService(addressValidator)
            // more code to configure userService
            return userService
        }
    
        single { AddressValidator() }
        single { provideUserService(get()) }
    }

    As mentioned earlier, single is used to create singletons. This means Koin creates only one object instance that is then shared by other objects.

    However, sometimes we need a new object instance for every dependency. In this case, the factory function helps us:

    val myModule = module {
        factory { AddressValidator() }
        single { UserService(get()) }
        single { OtherService(get()) } // OtherService constructor takes an AddressValidator instance
    }

    With factory Koin creates a new AddressValidator object whenever an AddressValidator is needed. Here, UserService and OtherService get two different AddressValidator instances via get().

    Providing interface implementations

    Let's assume AddressValidator is an interface that is implemented by AddressValidatorImpl. We can still write our Koin module like this:

    val myModule = module {
        single { AddressValidatorImpl() }
        single { UserService(get()) }
    }

    This defines an AddressValidatorImpl instance that can be injected into other components. However, it is likely that AddressValidatorImpl should only expose the AddressValidator interface. This way we can enforce that other components only depend on AddressValidator and not on a specific interface implementation. We can accomplish this by adding a generic type to the single function:

    val myModule = module {
        single<AddressValidator> { AddressValidatorImpl() }
        single { UserService(get()) }
    }

    This way we expose only the AddressValidator interface while creating an AddressValidatorImpl instance.

    Properties and configuration

    Obtaining properties from a configuration file is a common task. Koin supports loading property files and gives us the option to inject properties.

    First we need to tell Koin to load properties which is done by using the fileProperties function. fileProperties has an optional fileName argument we can use to specify a path to a property file. If no argument is given Koin tries to load koin.properties from the classpath.

    For example:

    val app = startKoin {
       
        // loads properties from koin.properties
        fileProperties()
        
        // loads properties from custom property file
        fileProperties("/other.properties")
        
        modules(myModule)
    }

    Assume we have a component that requires some configuration property:

    class ConfigurableComponent(val someProperty: String)

    .. and a koin.properties file with a single entry:

    foo.bar=baz

    We can now retrieve this property and inject it to ConfigurableComponent by using the getProperty function:

    val myModule = module {
        single { ConfigurableComponent(getProperty("foo.bar")) }
    }

    Summary

    Koin is an easy to use dependency injection container for Kotlin. Koin provides a simple DSL to define components and injection rules. We use this DSL to create Koin modules which are then used to initialize the dependency injection container. Koin is also able to inject properties loaded from files.

    For more information you should visit the Koin documentation page. You can find the sources for this post on GitHub.

  • Thursday, 18 February, 2021

    REST API Design: Dealing with concurrent updates

    Concurrency control can be an important part of a REST API, especially if you expect concurrent update requests for the same resource. In this post we will look at different options to avoid lost updates over HTTP.

    Let's start with an example request flow, to understand the problem:

    We start with Alice and Bob requesting the resource /articles/123 from the server, which responds with the current resource state. Then Bob executes an update request based on the previously received data. Shortly after that, Alice also executes an update request. Alice's request is also based on the previously received resource and does not include the changes made by Bob. After the server has finished processing Alice's update, Bob's changes are lost.

    HTTP provides a solution for this problem: Conditional requests, defined in RFC 7232.

    Conditional requests use validators and preconditions defined in specific headers. Validators are metadata generated by the server that can be used to define preconditions. For example, last modification dates or ETags are validators that can be used for preconditions. Based on those preconditions the server can decide if an update request should be executed.

    For state changing requests the If-Unmodified-Since and If-Match headers are particularly interesting. We will learn how to avoid concurrent updates using those headers in the next sections.

    Using a last modification date with an If-Unmodified-Since header

    Probably the easiest way to avoid lost updates is the use of a last modification date. Saving the date of last modification for a resource is often a good idea so it is likely we already have this value in our database. If this is not the case, it is often very easy to add.

    When returning a response to the client we can now add the last modification date in the Last-Modified response header. The Last-Modified header uses the following format:

    <day-name>, <day> <month-name> <year> <hour>:<minute>:<second> GMT

    For example:

    Request:

    GET /articles/123

    Response:

    HTTP/1.1 200 OK
    Last-Modified: Sat, 13 Feb 2021 12:34:56 GMT
    
    {
        "title": "Sunny summer",
        "text": "bla bla ..."
    }

    To update this resource the client now has to add the If-Unmodified-Since header to the request. The value of this header is set to the last modification date retrieved from the previous GET request.

    Example update request:

    PUT /articles/123
    If-Unmodified-Since: Sat, 13 Feb 2021 12:34:56 GMT
    
    {
        "title": "Sunny winter",
        "text": "bla bla ..."
    }

    Before executing the update, the server has to compare the last modification date of the resource with the value from the If-Unmodified-Since header. The update is only executed if both values are identical.

    One might argue that it is enough to check if the last modification date of the resource is newer than the value of the If-Unmodified-Since header. However, this gives clients the option to overrule other concurrent requests by sending a modified last modification date (e.g. a future date).

    A problem with this approach is that the precision of the Last-Modified header is limited to seconds. If multiple concurrent update requests are executed in the same second, we can still run into the lost update problem.
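    The server-side comparison described above could be sketched like this (method and parameter names are illustrative assumptions):

    ```java
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;
    import java.time.temporal.ChronoUnit;

    public class PreconditionCheck {
        // The update may only proceed if the If-Unmodified-Since value is identical
        // to the stored last modification date (truncated to seconds, since the
        // header cannot carry sub-second precision).
        static boolean updateAllowed(String ifUnmodifiedSince, ZonedDateTime lastModified) {
            ZonedDateTime headerValue =
                    ZonedDateTime.parse(ifUnmodifiedSince, DateTimeFormatter.RFC_1123_DATE_TIME);
            return headerValue.isEqual(lastModified.truncatedTo(ChronoUnit.SECONDS));
        }
    }
    ```

    Note the exact equality check: as explained above, a newer-than comparison would let clients overrule concurrent requests.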

    Using an ETag with an If-Match header

    Another approach is the use of an entity tag (ETag). ETags are opaque strings generated by the server for the requested resource representation. For example, the hash of the resource representation can be used as ETag.

    ETags are sent to the client using the ETag Header. For example:

    Request:

    GET /articles/123

    Response:

    HTTP/1.1 200 OK
    ETag: "a915ecb02a9136f8cfc0c2c5b2129c4b"
    
    {
        "title": "Sunny summer",
        "text": "bla bla ..."
    }

    When updating the resource, the client sends the previously received ETag value back to the server in the If-Match header:

    PUT /articles/123
    If-Match: "a915ecb02a9136f8cfc0c2c5b2129c4b"
    
    {
        "title": "Sunny winter",
        "text": "bla bla ..."
    }

    The server now verifies that the value of the If-Match header matches the current representation of the resource. If it does not match, the resource state on the server has been changed between the GET and PUT requests.

    Strong and weak validation

    RFC 7232 differentiates between weak and strong validation:

    Weak validators are easy to generate but are far less useful for comparisons. Strong validators are ideal for comparisons but can be very difficult (and occasionally impossible) to generate efficiently.

    Strong validators change whenever a resource representation changes. In contrast weak validators do not change every time the resource representation changes.

    ETags can be generated in weak and strong variants. Weak ETags must be prefixed by W/.

    Here are a few example ETags:

    Weak ETags:

    ETag: W/"abcd"
    ETag: W/"123"

    Strong ETags:

    ETag: "a915ecb02a9136f8cfc0c2c5b2129c4b"
    ETag: "ngl7Kfe73Mta"

    Besides concurrency control, preconditions are often used for caching and bandwidth reduction. In these situations weak validators can be good enough. For concurrency control in REST APIs strong validators are usually preferable.

    Note that using the Last-Modified and If-Unmodified-Since headers is considered weak because of the limited precision: we cannot detect whether the server state has been changed by another request within the same second. However, whether this is an actual problem depends on the number of concurrent update requests you expect.

    Computing ETags

    Strong ETags have to be unique for all versions of all representations for a particular resource. For example, JSON and XML representations of the same resource should have different ETags.

    Generating and validating strong ETags can be a bit tricky. For example, assume we generate an ETag by hashing a JSON representation of a resource before sending it to the client. To validate the ETag for an update request we now have to load the resource, convert it to JSON and then hash the JSON representation.

    In the best case resources contain an implementation-specific field that tracks changes. This can be a precise last modification date or some form of internal revision number. For example, when using database frameworks like Java Persistence API (JPA) with optimistic locking we might already have a version field that increases with every change.

    We can then compute an ETag by hashing the resource id, the media-type (e.g. application/json) together with the last modification date or the revision number.
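    Such an ETag computation could be sketched as follows (the parameter names and the choice of MD5 as hash function are assumptions for illustration):

    ```java
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class EtagGenerator {
        // Derive a strong ETag by hashing resource id, media type and a revision
        // number. Any change to the revision yields a different ETag, and the
        // media type makes JSON and XML representations produce different tags.
        static String etag(String resourceId, String mediaType, long revision) {
            try {
                MessageDigest md = MessageDigest.getInstance("MD5");
                byte[] hash = md.digest((resourceId + "|" + mediaType + "|" + revision)
                        .getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : hash) {
                    hex.append(String.format("%02x", b));
                }
                // ETags are quoted strings
                return "\"" + hex + "\"";
            } catch (java.security.NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        }
    }
    ```

    This avoids re-serializing and hashing the whole representation on every conditional request.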

    HTTP status codes and execution order

    When working with preconditions, two HTTP status codes are relevant:

    • 412 - Precondition failed indicates that one or more preconditions evaluated to false on the server (e.g. because the resource state has been changed on the server)
    • 428 - Precondition required has been added in RFC 6585 and indicates that the server requires the request to be conditional. The server should return this status code if an update request does not contain the expected preconditions

    RFC 7232 also defines the evaluation order for HTTP 412 (Precondition failed):

    [..] a recipient cache or origin server MUST evaluate received request preconditions after it has successfully performed its normal request checks and just before it would perform the action associated with the request method.  A server MUST ignore all received preconditions if its response to the same request without those conditions would have been a status code other than a 2xx (Successful) or 412 (Precondition Failed).  In other words, redirects and failures take precedence over the evaluation of preconditions in conditional requests.

    This usually results in the following processing order of an update request:

    Before evaluating preconditions, we check if the request fulfills all other requirements. When this is not the case, we respond with a standard 4xx status code. This way we make sure that other errors are not suppressed by the 412 status code.


    Interested in more REST related articles? Have a look at my REST API design page.


  • Tuesday, 2 February, 2021

    Validation in Spring Boot applications

    Validation in Spring Boot applications can be done in many different ways. Depending on your requirements some ways might fit better to your application than others. In this post we will explore the usual options to validate data in Spring Boot applications.

    Validation is done by using the Bean Validation API. The reference implementation for the Bean Validation API is Hibernate Validator.

    All required dependencies are packaged in the Spring Boot starter POM spring-boot-starter-validation. So usually all you need to get started is the following dependency:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-validation</artifactId>
    </dependency>
    

    Validation constraints are defined by annotating fields with appropriate Bean Validation annotations. For example:

    public class Address {
    
        @NotBlank
        @Size(max = 50)
        private String street;
    
        @NotBlank
        @Size(max = 50)
        private String city;
    
        @NotBlank
        @Size(max = 10)
        private String zipCode;
        
        @NotBlank
        @Size(max = 3)
        private String countryCode;
    
        // getters + setters
    }

    I think these annotations are quite self-explanatory. We will use this Address class in many of the following examples.

    You can find a complete list of built-in constraint annotations in the Bean Validation documentation. Of course you can also define your own validation constraints by creating a custom ConstraintValidator.

    Defining validation constraints is only one part. Next we need to trigger the actual validation. This can be done by Spring or by manually invoking a Validator. We will see both approaches in the next sections.

    Validating incoming request data

    When building a REST API with Spring Boot it is likely you want to validate incoming request data. This can be done by simply adding the @Valid annotation to the @RequestBody method parameter. For example:

    @RestController
    public class AddressController {
    
        @PostMapping("/address")
        public void createAddress(@Valid @RequestBody Address address) {
            // ..
        }
    }

    Spring now automatically validates the passed Address object based on the previously defined constraints.

    This type of validation is usually used to make sure the data sent by the client is syntactically correct. If the validation fails, the controller method is not called and an HTTP 400 (Bad request) response is returned to the client. More complex business-specific validation constraints should typically be checked later in the business layer.

    Persistence layer validation

    When using a relational database in your Spring Boot application, it is likely that you are also using Spring Data and Hibernate. Hibernate comes with support for Bean Validation. If your entities contain Bean Validation annotations, these are automatically checked when persisting an entity.

    Note that the persistence layer should definitely not be the only location for validation. If validation fails here, it usually means that some sort of validation is missing in other application components. Persistence layer validation should be seen as the last line of defense. In addition to that, the persistence layer is usually too late for business related validation.

    Method parameter validation

    Another option is the method parameter validation provided by Spring. This allows us to add Bean Validation annotations to method parameters. Spring then uses an AOP interceptor to validate the parameters before the actual method is called.

    For example:

    @Service
    @Validated
    public class CustomerService {
    
        public void updateAddress(
                @Pattern(regexp = "\\w{2}\\d{8}") String customerId,
                @Valid Address newAddress
        ) {
            // ..
        }
    }

    This approach can be useful to validate data coming into your service layer. However, before committing to this approach you should be aware of its limitations as this type of validation only works if Spring proxies are involved. See my separate post about Method parameter validation for more details.

    Note that this approach can make unit testing harder. In order to test validation constraints in your services you now have to bootstrap a Spring application context.
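    The @Pattern constraint in the example above uses a plain java.util.regex expression. As a quick illustration of what the (made-up) customer id format accepts:

    ```java
    import java.util.regex.Pattern;

    public class CustomerIdPattern {
        // Same expression as in the @Pattern annotation above:
        // two word characters followed by eight digits
        static final Pattern CUSTOMER_ID = Pattern.compile("\\w{2}\\d{8}");

        static boolean isValid(String customerId) {
            return CUSTOMER_ID.matcher(customerId).matches();
        }
    }
    ```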

    Triggering Bean Validation programmatically

    In the previous validation solutions the actual validation is triggered by Spring or Hibernate. However, it can be quite useful to trigger validation manually. This gives us great flexibility in integrating validation into the appropriate location of our application.

    We start by creating a ValidationFacade bean:

    @Component
    public class ValidationFacade {
    
        private final Validator validator;
    
        public ValidationFacade(Validator validator) {
            this.validator = validator;
        }
    
        public <T> void validate(T object, Class<?>... groups) {
            Set<ConstraintViolation<T>> violations = validator.validate(object, groups);
            if (!violations.isEmpty()) {
                throw new ConstraintViolationException(violations);
            }
        }
    }

    This bean accepts a Validator as constructor parameter. Validator is part of the Bean Validation API and responsible for validating Java objects. An instance of Validator is automatically provided by Spring, so it can be injected into our ValidationFacade.

    Within the validate(..) method we use the Validator to validate a passed object. The result is a Set of ConstraintViolations. If no validation constraints are violated (= the object is valid) the Set is empty. Otherwise, we throw a ConstraintViolationException.

    We can now inject our ValidationFacade into other beans. For example:

    @Service
    public class CustomerService {
    
        private final ValidationFacade validationFacade;
    
        public CustomerService(ValidationFacade validationFacade) {
            this.validationFacade = validationFacade;
        }
    
        public void updateAddress(String customerId, Address newAddress) {
            validationFacade.validate(newAddress);
            // ...
        }
    }

    To validate an object (here newAddress) we simply have to call the validate(..) method of ValidationFacade. Of course we could also inject the Validator directly in our CustomerService. However, in case of validation errors we usually do not want to deal with the returned Set of ConstraintViolations. Instead it is likely we simply want to throw an exception, which is exactly what ValidationFacade is doing.

    Often this is a good approach for validation in the service/business layer. It is not limited to method parameters and can be used with different types of objects. For example, we can load an object from the database, modify it and then validate it before we continue.

    This approach is also easy to unit test as we can simply mock ValidationFacade. In case we want real validation in unit tests, the required Validator instance can be created manually (as shown in the next section). Neither case requires bootstrapping a Spring application context in our tests.

    Validating inside business classes

    Another approach is to move validation inside your actual business classes. When doing Domain Driven Design this can be a good fit. For example, when creating an Address instance the constructor can make sure we are not able to construct an invalid object:

    public class Address {
    
        @NotBlank
        @Size(max = 50)
        private String street;
    
        @NotBlank
        @Size(max = 50)
        private String city;
    
        ...
        
        public Address(String street, String city) {
            this.street = street;
            this.city = city;
            ValidationHelper.validate(this);
        }
    }

    Here the constructor calls a static validate(..) method to validate the object state. This static validate(..) method looks similar to the previously shown method in ValidationFacade:

    public class ValidationHelper {
    
        private static final Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
    
        public static <T> void validate(T object, Class<?>... groups) {
            Set<ConstraintViolation<T>> violations = validator.validate(object, groups);
            if (!violations.isEmpty()) {
                throw new ConstraintViolationException(violations);
            }
        }
    }

    The difference here is that we do not retrieve the Validator instance by Spring. Instead, we create it manually by using:

    Validation.buildDefaultValidatorFactory().getValidator()

    This way we can integrate validation directly into domain objects without relying on an outside caller to validate the object.

    Summary

    We saw different ways to deal with validation in Spring Boot applications. Validating incoming request data is good to reject nonsense as early as possible. Persistence layer validation should only be used as an additional layer of safety. Method validation can be quite useful, but make sure you understand its limitations. Even if triggering Bean Validation programmatically takes a bit more effort, it is usually the most flexible way.

    You can find the source code for the shown examples on GitHub.

  • Sunday, 17 January, 2021

    REST: Partial updates with PATCH

    In previous posts we learned how to update/replace resources using the HTTP PUT operation. We also learned about the differences between POST, PUT and PATCH. In this post we will now see how to perform partial updates with the HTTP PATCH method.

    Before we start, let's quickly check why partial updates can be useful:

    • Simplicity - If a client only wants to update a single field, a partial update request can be simpler to implement.
    • Bandwidth - If your resource representations are quite large, partial updates can reduce the amount of bandwidth required.
    • Lost updates - Resource replacements with PUT can be susceptible to the lost update problem. While partial updates do not solve this problem, they can help reduce the number of possible conflicts.

    The PATCH HTTP method

    Unlike PUT or POST, the PATCH method is not part of the original HTTP RFC. It was added later via RFC 5789. The PATCH method is neither safe nor idempotent. However, PATCH is often used in an idempotent way.

    A PATCH request can contain one or more requested changes to a resource. If more than one change is requested the server must ensure that all changes are applied atomically. The RFC says:

    The server MUST apply the entire set of changes atomically and never provide ([..]) a partially modified representation. If the entire patch document cannot be successfully applied, then the server MUST NOT apply any of the changes.

    The request body for PATCH is quite flexible. The RFC only says the request body has to contain instructions on how the resource should be modified:

    With PATCH, [..], the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.  

    This means we do not have to use the same resource representation for PATCH requests as we might use for PUT or GET requests. We can use a completely different media type to describe the resource changes.

    PATCH can be used in two common ways which both have their own pros and cons. We will look into both of them in the next sections.

    Using the standard resource representation to send changes (JSON Merge Patch)

    The most intuitive way to use PATCH is to keep the standard resource representation that is used in GET or PUT requests. However, with PATCH we only include the fields that should be changed.

    Assume we have a simple product resource. The response of a simple GET request might look like this:

    GET /products/123
    
    {
        "name": "Cool Gadget",
        "description": "It looks very cool",
        "price": 4.50,
        "dimension": {
            "width": 1.3,
            "height": 2.52,
            "depth": 0.9
        },
        "tags": ["cool", "cheap", "gadget"]
    }

    Now we want to increase the price, remove the cheap tag and update the product width. To accomplish this, we can use the following PATCH request:

    PATCH /products/123
    {
        "price": 6.20,
        "dimension": {
            "width": 1.35
        },
        "tags": ["cool", "gadget"]
    }

    Fields not included in the request should stay unmodified. In order to remove an element from the tags array we have to include all remaining array elements.

    This usage of PATCH is called JSON Merge Patch and is defined in RFC 7396. You can think of it as a PUT request that only uses a subset of fields. Patching this way usually makes PATCH requests idempotent.
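To make the merge semantics concrete, here is a small illustrative sketch of the RFC 7396 merge algorithm in Python. The function name merge_patch is my own choice, not part of any library; a production server would typically use an existing JSON Merge Patch implementation instead.

```python
import json

def merge_patch(target, patch):
    """Apply a JSON Merge Patch (RFC 7396) to a target document."""
    if not isinstance(patch, dict):
        # A non-object patch value replaces the target entirely
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null removes the member
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

product = json.loads("""{
    "name": "Cool Gadget",
    "price": 4.50,
    "dimension": {"width": 1.3, "height": 2.52, "depth": 0.9},
    "tags": ["cool", "cheap", "gadget"]
}""")

patch = {
    "price": 6.20,
    "dimension": {"width": 1.35},
    "tags": ["cool", "gadget"]
}

updated = merge_patch(product, patch)
print(updated["price"])      # 6.2
print(updated["dimension"])  # {'width': 1.35, 'height': 2.52, 'depth': 0.9}
print(updated["tags"])       # ['cool', 'gadget']
```

Note how nested objects are merged recursively (only width changes inside dimension), while arrays are replaced as a whole, which is why the request has to send all remaining tags.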

    JSON Merge Patch and null values

    There is one caveat with JSON Merge Patch you should be aware of: The processing of null values.

    Assume we want to remove the description of the previously used product resource. The PATCH request looks like this:

    PATCH /products/123
    {
        "description": null
    }

    To fulfill the client's intent the server has to differentiate between the following situations:

    • The description field is not part of the JSON document. In this case, the description should stay unmodified.
    • The description field is part of the JSON document and has the value null. Here, the server should remove the current description.

    Be aware of this differentiation when using JSON libraries that map JSON documents to objects. In strongly typed programming languages like Java it is likely that both cases produce the same result when mapped to a strongly typed object (the description field may end up null in both cases).

    So, when supporting null values, you should make sure you can handle both situations.
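A quick sketch can show the distinction. With a dynamic representation such as a Python dict the two cases stay distinguishable: the key is either absent or present with a null value. The helper apply_field is a hypothetical name used only for illustration.

```python
import json

patch_without_field = json.loads('{"price": 6.20}')
patch_with_null = json.loads('{"description": null, "price": 6.20}')

# The dict keeps the distinction: key absent vs. key mapped to None
print("description" in patch_without_field)  # False -> leave description unchanged
print("description" in patch_with_null)      # True
print(patch_with_null["description"])        # None -> remove the description

def apply_field(resource, patch, field):
    """Merge-patch semantics for one field: absent = keep, null = remove."""
    if field not in patch:
        return resource
    updated = dict(resource)
    if patch[field] is None:
        updated.pop(field, None)
    else:
        updated[field] = patch[field]
    return updated

resource = {"name": "Cool Gadget", "description": "It looks very cool"}
print(apply_field(resource, patch_with_null, "description"))
# {'name': 'Cool Gadget'}
```

A strongly typed object mapper would collapse both patches into the same description == null state, losing exactly this information.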

    Using a separate Patch format

    As mentioned earlier it is fine to use a different media type for PATCH requests.

    Again we want to increase the price, remove the cheap tag and update the product width. A different way to accomplish this might look like this:

    PATCH /products/123
    {
        "$.price": {
            "action": "replace",
            "newValue": 6.20
        },
        "$.dimension.width": {        
            "action": "replace",
            "newValue": 1.35
        },
        "$.tags[?(@ == 'cheap')]": {
            "action": "remove"
        }
    }

    Here we use JSONPath expressions to select the values we want to change. For each selected value we then use a small JSON object to describe the desired action.

    To replace simple values this format is quite verbose. However, it also has some advantages, especially when working with arrays. As shown in the example we can remove an array element without sending all remaining array elements. This can be useful when working with large arrays.
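A server-side applier for such a custom format could look like the following sketch. Note that this is a deliberately simplified toy: it handles only plain dotted object paths and the one value-based array filter from the example, whereas a real implementation would use a full JSONPath library and would have to apply all changes atomically. The function name apply_change is my own.

```python
import re

def apply_change(doc, path, change):
    """Apply one change from the custom patch format shown above (simplified)."""
    # Value-based array removal, e.g. "$.tags[?(@ == 'cheap')]"
    filter_match = re.fullmatch(r"\$\.(\w+)\[\?\(@ == '([^']*)'\)\]", path)
    if filter_match and change["action"] == "remove":
        field, value = filter_match.groups()
        doc[field] = [item for item in doc[field] if item != value]
        return
    # Dotted object path, e.g. "$.dimension.width"
    parts = path.lstrip("$.").split(".")
    if change["action"] == "replace":
        target = doc
        for part in parts[:-1]:
            target = target[part]
        target[parts[-1]] = change["newValue"]

patch = {
    "$.price": {"action": "replace", "newValue": 6.20},
    "$.dimension.width": {"action": "replace", "newValue": 1.35},
    "$.tags[?(@ == 'cheap')]": {"action": "remove"},
}

product = {
    "name": "Cool Gadget",
    "price": 4.50,
    "dimension": {"width": 1.3, "height": 2.52, "depth": 0.9},
    "tags": ["cool", "cheap", "gadget"],
}

for path, change in patch.items():
    apply_change(product, path, change)

print(product["price"])  # 6.2
print(product["tags"])   # ['cool', 'gadget']
```

The array removal illustrates the bandwidth advantage: the cheap tag is removed by value, without the client resending the whole array.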

    JSON Patch

    A standardized media type to describe changes using JSON is JSON Patch (described in RFC 6902). With JSON Patch our request looks like this:

    PATCH /products/123
    Content-Type: application/json-patch+json
    
    [
        { 
            "op": "replace", 
            "path": "/price", 
            "value": 6.20
        },
        {
            "op": "replace",
            "path": "/dimension/width",
            "value": 1.35
        },
        {
            "op": "remove", 
            "path": "/tags/1"
        }
    ]

    This looks a bit similar to our previous solution. JSON Patch uses the op element to describe the desired action. The path element contains a JSON Pointer (RFC 6901) to select the element to which the change should be applied.

    Note that the current version of JSON Patch does not support removing an array element by value. Instead, we have to remove the element using the array index. With /tags/1 we can select the second array element.
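The following sketch shows how a server might apply the replace and remove operations from the example, including the atomicity requirement from the RFC (work on a copy, so a failing operation leaves the original untouched). It covers only these two operations and is not a complete RFC 6902 implementation; the function names are my own.

```python
import copy

def resolve(doc, pointer):
    """Split a JSON Pointer (RFC 6901) and walk to the parent of the target."""
    tokens = [t.replace("~1", "/").replace("~0", "~") for t in pointer.split("/")[1:]]
    parent = doc
    for token in tokens[:-1]:
        parent = parent[int(token)] if isinstance(parent, list) else parent[token]
    last = tokens[-1]
    if isinstance(parent, list):
        last = int(last)  # array elements are addressed by index
    return parent, last

def apply_json_patch(doc, operations):
    """Apply 'replace' and 'remove' JSON Patch operations atomically."""
    result = copy.deepcopy(doc)
    for op in operations:
        parent, key = resolve(result, op["path"])
        if op["op"] == "replace":
            parent[key] = op["value"]
        elif op["op"] == "remove":
            del parent[key]
        else:
            raise ValueError("unsupported op: " + op["op"])
    return result

product = {
    "price": 4.50,
    "dimension": {"width": 1.3, "height": 2.52},
    "tags": ["cool", "cheap", "gadget"],
}

patch = [
    {"op": "replace", "path": "/price", "value": 6.20},
    {"op": "replace", "path": "/dimension/width", "value": 1.35},
    {"op": "remove", "path": "/tags/1"},
]

updated = apply_json_patch(product, patch)
print(updated["tags"])  # ['cool', 'gadget']
print(product["tags"])  # ['cool', 'cheap', 'gadget'] (original untouched)
```

Note that /tags/1 removes the second element by index, which is exactly the limitation discussed above: the client has to know the position of the cheap tag.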

    Before using JSON Patch, you should evaluate if it fulfills your needs and if you are fine with its limitations. In the issues of the GitHub repository json-patch2 you can find a discussion about a possible revision of JSON Patch.

    If you are using XML instead of JSON you should have a look at XML Patch (RFC 5261), which works similarly but uses XML.

    The Accept-Patch header

    The RFC for HTTP PATCH also defines a new response header for HTTP OPTIONS requests: Accept-Patch. With Accept-Patch the server can communicate which media types are supported by the PATCH operation for a given resource. The RFC says:

    Accept-Patch SHOULD appear in the OPTIONS response for any resource that supports the use of the PATCH method.

    An example HTTP OPTIONS request/response for a resource that supports the PATCH method and uses JSON Patch might look like this:

    Request:

    OPTIONS /products/123

    Response:

    HTTP/1.1 200 OK
    Allow: GET, PUT, POST, OPTIONS, HEAD, DELETE, PATCH
    Accept-Patch: application/json-patch+json
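As an illustration, the request/response exchange above can be reproduced with a minimal sketch based on Python's built-in http.server. The /products/123 path and the advertised media type are just the values from the example, not a real service.

```python
import http.client
import http.server
import threading

class ProductHandler(http.server.BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Advertise supported methods and PATCH media types for this resource
        self.send_response(200)
        self.send_header("Allow", "GET, PUT, POST, OPTIONS, HEAD, DELETE, PATCH")
        self.send_header("Accept-Patch", "application/json-patch+json")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example output quiet

# Bind to port 0 so the OS picks a free port
server = http.server.HTTPServer(("127.0.0.1", 0), ProductHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "/products/123")
response = conn.getresponse()
print(response.status)                        # 200
print(response.getheader("Accept-Patch"))     # application/json-patch+json
server.shutdown()
```

A client can therefore discover, before sending a PATCH request, which patch document formats the server will accept for this resource.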

    Responses to HTTP PATCH operations

    The PATCH RFC does not mandate how the response body of a PATCH operation should look. It is fine to return the updated resource. It is also fine to leave the response body empty.

    The server responds to HTTP PATCH requests usually with one of the following HTTP status codes:

    • 204 (No Content) - The operation has been completed successfully and no data is returned.
    • 200 (OK) - The operation has been completed successfully and the response body contains more information (for example the updated resource).
    • 400 (Bad Request) - The request body is malformed and cannot be processed.
    • 409 (Conflict) - The request is syntactically valid but cannot be applied to the resource. For example, with JSON Patch this can occur if the element selected by a JSON Pointer (the path field) does not exist.

    Summary

    The PATCH operation is quite flexible and can be used in different ways. JSON Merge Patch uses standard resource representations to perform partial updates. JSON Patch, however, uses a separate patch format to describe the desired changes. It is also fine to come up with a custom PATCH format. Resources that support the PATCH operation should return the Accept-Patch header for OPTIONS requests.