mscharhag, Programming and Stuff;

A blog about programming and software development topics, mostly focused on Java technologies including Java EE, Spring and Grails.

  • Wednesday, 6 October, 2021

    Media types and the Content-Type header

    A media type (formerly known as MIME type) is an identifier for file formats and format contents. Media types are used by different internet technologies like e-mail or HTTP.

    Media types consist of a type and a subtype. They can optionally contain a suffix and one or more parameters. Media types follow this syntax:

    type "/" [tree "."] subtype ["+" suffix]* [";" parameter]
    

    For example, the media type for JSON documents is:

    application/json

    It consists of the type application with the subtype json.

    An HTML document with UTF-8 encoding can be expressed as:

    text/html; charset=UTF-8

    Here we have the type text, the subtype html and a parameter charset=UTF-8 indicating UTF-8 character encoding.
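
    To make the structure concrete, here is a minimal Kotlin sketch (an illustrative helper, not a full implementation) that splits a media type string like the one above into type, subtype and parameters. Tree and suffix handling as well as validation are omitted:

    fun parseMediaType(value: String): Triple<String, String, Map<String, String>> {
        val parts = value.split(";")
        // the first part holds "type/subtype", everything after ";" are parameters
        val (type, subtype) = parts[0].trim().split("/", limit = 2)
        val parameters = parts.drop(1).associate {
            val (name, param) = it.trim().split("=", limit = 2)
            name to param
        }
        return Triple(type, subtype, parameters)
    }

    For example, parseMediaType("text/html; charset=UTF-8") returns the type text, the subtype html and the parameter map {charset=UTF-8}.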

    A suffix can be used to specify the underlying format of a media type. For example, SVG images use the media type:

    image/svg+xml

    The type is image, svg is the subtype and xml is the suffix. The suffix tells us that the SVG file format is based on XML.

    Note that subtypes can be organized in a hierarchical tree structure. For example, the binary format used by Apache Thrift uses the following media type:

    application/vnd.apache.thrift.binary

    vnd is a standardized prefix that tells us this is a vendor-specific media type.

    The Content-Type header

    With HTTP, any message that contains an entity-body should include a Content-Type header to define the media type of the body.

    The RFC says:

    Any HTTP/1.1 message containing an entity-body SHOULD include a Content-Type header field defining the media type of that body. If and only if the media type is not given by a Content-Type field, the recipient MAY attempt to guess the media type via inspection of its content and/or the name extension(s) of the URI used to identify the resource. If the media type remains unknown, the recipient SHOULD treat it as type "application/octet-stream".

    The RFC allows clients to guess the media type if the Content-Type header is not present. However, relying on this should be avoided whenever possible.

    Guessing the media type of a piece of data is called content sniffing (or MIME sniffing). This practice was (and sometimes still is) used by web browsers and has been the source of multiple security vulnerabilities. To explicitly tell browsers not to guess media types, the following header can be added:

    X-Content-Type-Options: nosniff

    Note that the Content-Type header always contains the media type of the original resource, before any content encoding is applied. Content encoding (like gzip compression) is indicated by the Content-Encoding header.
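
    As a simple illustration, here is a minimal Kotlin sketch using the JDK's built-in com.sun.net.httpserver that sets both headers on a response (endpoint and payload are made up for this example):

    import com.sun.net.httpserver.HttpServer
    import java.net.InetSocketAddress

    fun main() {
        val server = HttpServer.create(InetSocketAddress(8080), 0)
        server.createContext("/hello") { exchange ->
            val body = """{"message": "hello"}""".toByteArray(Charsets.UTF_8)
            // declare the media type of the response body, including the charset parameter
            exchange.responseHeaders.add("Content-Type", "application/json; charset=UTF-8")
            // tell browsers not to sniff the media type
            exchange.responseHeaders.add("X-Content-Type-Options", "nosniff")
            exchange.sendResponseHeaders(200, body.size.toLong())
            exchange.responseBody.use { it.write(body) }
        }
        server.start()
    }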

  • Monday, 20 September, 2021

    From layers to onions and hexagons

    In this post we will explore the transition from a classic layered software architecture to a hexagonal architecture. The hexagonal architecture (also called ports and adapters architecture) is a design pattern to create loosely coupled application components.

    This post was inspired by a German article by Silas Graffy called Von Schichten zu Ringen - Hexagonale Architekturen erklärt (From Layers to Rings - Hexagonal Architectures Explained).

    Classic layers

    Layering is one of the most widely known techniques to break apart complicated software systems. It has been promoted in many popular books, like Patterns of Enterprise Application Architecture by Martin Fowler.

    Layers allow us to build software on top of a lower level layer without knowing the details of any of the lower level layers. In an ideal world we can even replace lower level layers with different implementations. While the number of layers can vary, we mostly see three or four layers in practice.

    Here, we have an example diagram of a three-layer architecture:

    The presentation layer contains components related to user (or API) interfaces. In the domain layer we find the logic related to the problem the application solves. The database access layer is responsible for database interaction.

    The dependency direction is from top to bottom. The code in the presentation layer depends on code in the domain layer, which in turn depends on code located in the database layer.

    As an example, we will examine a simple use case: the creation of a new user. Let's add the related classes to the layer diagram:

    In the database layer we have a UserDao class with a saveUser(..) method that accepts a UserEntity class. UserEntity might contain methods required by UserDao for interacting with the database. With ORM frameworks (like JPA), UserEntity might contain information related to object-relational mapping.

    The domain layer provides a UserService and a User class. Both might contain domain logic. UserService interacts with UserDao to save a User in the database. UserDao does not know about the User object, so UserService needs to convert User to UserEntity before calling UserDao.saveUser(..).

    In the presentation layer we have a UserController class which interacts with the domain layer using the UserService and User classes. The presentation layer also has its own class to represent a user: UserDto might contain utility methods to format field values for display in a user interface.

    What is the problem with this?

    We have some potential problems to discuss here.

    First, we can easily get the impression that the database is the most important part of the system, as all other layers depend on it. However, in modern software development we no longer start with creating huge ER diagrams for the database layer. Instead, we usually (should) focus on the business domain.

    As the domain layer depends on the database layer, it needs to convert its own objects (User) to objects the database layer knows how to use (UserEntity). So we have code in the domain layer that deals with database-layer-specific classes. Ideally, we want the domain layer to focus on domain logic and nothing else.

    The domain layer directly uses implementation classes from the database layer. This makes it hard to replace the database layer with a different implementation. This is important even if we do not plan to replace the database with a different storage technology: think of replacing the database layer with mocks for unit testing or using in-memory databases for local development.

    Abstraction with interfaces

    The last of these problems can be solved by introducing interfaces. The obvious and quite common solution is to add an interface in the database layer. Higher level layers use the interface and do not depend on implementation classes.

    Here we split the UserDao class into an interface (UserDao) and an implementation class (UserDaoImpl). UserService only uses the UserDao interface. This abstraction gives us more flexibility as we can now change UserDao implementations in the database layer.

    However, from the layer perspective nothing changed. We still have code related to the database layer in our domain layer.

    Now, we can do a little bit of magic by moving the interface into the domain layer:

    Note we did not just move the UserDao interface. As UserDao is now part of the domain layer, it uses domain classes (User) instead of database related classes (UserEntity).

    This little change reverses the dependency direction between the domain and database layers. The domain layer no longer depends on the database layer. Instead, the database layer depends on the domain layer, as it requires access to the UserDao interface and the User class. The database layer is now responsible for the conversion between User and UserEntity.
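
    In code, this inversion might look like the following Kotlin sketch (class and method names follow the diagrams, everything else is simplified for illustration):

    // domain layer: owns the interface and works only with domain classes
    data class User(val name: String, val email: String)

    interface UserDao {
        fun saveUser(user: User)
    }

    class UserService(private val userDao: UserDao) {
        fun createUser(name: String, email: String) {
            val user = User(name, email)
            // ... domain logic ...
            userDao.saveUser(user)
        }
    }

    // database layer: depends on the domain layer, not the other way around
    class UserEntity(val name: String, val email: String)

    class UserDaoImpl : UserDao {
        override fun saveUser(user: User) {
            val entity = UserEntity(user.name, user.email)
            // persist the entity using JPA, JDBC, ...
        }
    }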

    In and out

    While the dependency direction has been changed, the call direction stays the same:

    The domain layer is the center of the application. We can say that the presentation layer calls into the domain layer while the domain layer calls out to the database layer.

    As a next step, we can split layers into more specific components. For example:

    This is what hexagonal architecture (also called ports and adapters) is about.

    We no longer have layers here. Instead, we have the application domain in the center and so-called adapters. Adapters provide additional functionality like user interfaces or database access. Some adapters call into the domain center (here: UI and REST API) while others are outgoing adapters called by the domain center via interfaces (here: database, message queue and e-mail).

    This allows us to separate pieces of functionality into different modules/packages while the domain logic does not have any outside dependencies.

    The onion architecture

    From the previous step it is easy to move to the onion architecture (sometimes also called clean architecture).

    The domain center is split into the domain model and domain services (sometimes called use cases). Application services contain incoming and outgoing adapters. On the outermost layer we locate infrastructure elements like databases or message queues.

    What to remember?

    We looked at the transition from a classic layered architecture to more modern architecture approaches. While the details of hexagonal architecture and onion architecture might vary, both share important parts:

    • The application domain is the core part of the application without any external dependencies. This allows easy testing and modification of domain logic.
    • Adapters located around the domain logic talk with external systems. These adapters can easily be replaced by different implementations without any changes to the domain logic.
    • The dependency direction always goes from the outside (adapters, external dependencies) to the inside (domain logic).
    • The call direction can be in and out of the domain center. At least for calling out of the domain center, we need interfaces to ensure the correct dependency direction.


  • Wednesday, 28 July, 2021

    File down- and uploads in RESTful web services

    Usually we use standard data exchange formats like JSON or XML with REST web services. However, many REST services have at least some operations that can be hard to fulfill with just JSON or XML. Examples are uploads of product images, data imports using uploaded CSV files or generation of downloadable PDF reports.

    In this post we focus on those operations, which are often categorized as file down- and uploads. This categorization is a bit fuzzy, as sending a simple JSON document can also be seen as a (JSON) file upload operation.

    Think about the operation you want to express

    A common mistake is to focus on the specific file format that is required for the operation. Instead, we should think about the operation we want to express. The file format just determines the media type used for the operation.

    For example, assume we want to design an API that lets users upload an avatar image to their user account.

    Here, it is usually a good idea to separate the avatar image from the user account resource for various reasons:

    • The avatar image is unlikely to change, so it might be a good candidate for caching. On the other hand, the user account resource might contain things like the last login date, which changes frequently.
    • Not all clients accessing the user account might be interested in the avatar image. So, bandwidth can be saved.
    • For clients it is often preferable to load images separately (think of web applications using <img> tags).

    The user account resource might be accessible via:

    /users/<user-id>

    We can come up with a simple sub-resource representing the avatar image:

    /users/<user-id>/avatar

    Uploading an avatar is a simple replace operation which can be expressed via PUT:

    PUT /users/<user-id>/avatar
    Content-Type: image/jpeg
    
    <image data>
    

    In case a user wants to delete his avatar image, we can use a simple DELETE operation:

    DELETE /users/<user-id>/avatar
    

    And of course clients need a way to show the avatar image. So, we can provide a download operation with GET:

    GET /users/<user-id>/avatar
    

    which returns

    HTTP/1.1 200 OK
    Content-Type: image/jpeg
    
    <image data>
    

    In this simple example we use a new sub-resource with the common update, delete and get operations. The only difference is that we use an image media type instead of JSON or XML.

    Let's look at a different example.

    Assume we provide an API to manage product data. We want to extend this API with an option to import products from an uploaded CSV file. Instead of thinking about file uploads we should think about a way to express a product import operation.

    Probably the simplest approach is to send a POST request to a separate resource:

    POST /product-import
    Content-Type: text/csv
    
    <csv data>
    

    Alternatively, we can also see this as a bulk operation for products. As we learned in another post about bulk operations with REST, the PATCH method is a possible way to express a bulk operation on a collection. In this case, the CSV document describes the desired changes to the product collection.

    For example:

    PATCH /products
    Content-Type: text/csv
    
    action,id,name,price
    create,,Cool Gadget,3.99
    create,,Nice cap,9.50
    delete,42,,
    

    This example creates two new products and deletes the product with id 42.

    Processing file uploads can take a considerable amount of time. So think about designing it as an asynchronous REST operation.
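
    For example, a hypothetical asynchronous import could accept the upload and return a job resource the client can poll (the job URI is made up for illustration):

    POST /product-import
    Content-Type: text/csv
    
    <csv data>

    which immediately returns

    HTTP/1.1 202 Accepted
    Location: /product-import/jobs/123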

    Mixing files and metadata

    In some situations we might need to attach additional metadata to a file. For example, assume we have an API where users can upload holiday photos. Besides the actual image data a photo might also contain a description, a location where it was taken and more.

    Here, I would (again) recommend using two separate operations, for reasons similar to those stated in the previous section about the avatar image. Even if the situation is a bit different here (the metadata is directly linked to the image), it is usually the simpler approach.

    In this case, we can first create a photo resource by sending the actual image:

    POST /photos
    Content-Type: image/jpeg
    
    <image data>

    As response we get:

    HTTP/1.1 201 Created
    Location: /photos/123

    After that, we can attach additional metadata to the photo:

    PUT /photos/123/metadata
    Content-Type: application/json
    
    {
        "description": "Nice shot of a beach in hawaii",
        "location": "hawaii",
        "filename": "hawaii-beach.jpg"
    }
    

    Of course we can also design it the other way around and send the metadata before the image.

    Embedding Base64 encoded files in JSON or XML

    In case splitting file content and metadata into separate requests is not possible, we can embed files into JSON / XML documents using Base64 encoding. Base64 encoding converts binary data to a text representation that can be embedded in other text-based formats like JSON or XML.

    An example request might look like this:

    POST /photos
    Content-Type: application/json
    
    {
        "width": "1280",
        "height": "920",
        "filename": "funny-cat.jpg",
        "image": "TmljZSBleGFt...cGxlIHRleHQ="
    }
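
    Producing such a request body on the client side is straightforward. A minimal Kotlin sketch (file name taken from the example above, plain string template instead of a JSON library):

    import java.io.File
    import java.util.Base64

    fun main() {
        // read the binary image and convert it to a Base64 text representation
        val imageBytes = File("funny-cat.jpg").readBytes()
        val base64Image = Base64.getEncoder().encodeToString(imageBytes)

        // embed the encoded image in a JSON document
        val json = """{"filename": "funny-cat.jpg", "image": "$base64Image"}"""
        println(json)
    }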

    Mixing media-types with multipart requests

    Another possible approach to transfer image data and metadata in a single request / response is to use multipart media types.

    Multipart media types require a boundary parameter that is used as a delimiter between different body parts. The following request consists of two body parts: the first one contains the image while the second one contains the metadata.

    For example

    POST /photos
    Content-Type: multipart/mixed; boundary=foobar
    
    --foobar
    Content-Type: image/jpeg
    
    <image data>
    --foobar
    Content-Type: application/json
    
    {
        "width": "1280",
        "height": "920",
        "filename": "funny-cat.jpg"
    }
    --foobar--

    Unfortunately, multipart requests / responses are often hard to work with. For example, not every REST client might be able to construct these requests, and it can be hard to verify responses in unit tests.
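
    To illustrate the point, here is a rough Kotlin sketch that assembles the request above by hand using the JDK's HttpClient (URL and image data are placeholders; passing the image bytes through an ISO-8859-1 string is a simplification to keep the sketch short):

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    fun main() {
        val boundary = "foobar"
        val imageData = ByteArray(0) // placeholder for the real image bytes
        val metadata = """{"filename": "funny-cat.jpg"}"""

        // each part starts with --boundary, followed by its own headers,
        // a blank line and the part content; the final boundary ends with --
        val body = buildString {
            append("--$boundary\r\n")
            append("Content-Type: image/jpeg\r\n\r\n")
            append(String(imageData, Charsets.ISO_8859_1))
            append("\r\n--$boundary\r\n")
            append("Content-Type: application/json\r\n\r\n")
            append(metadata)
            append("\r\n--$boundary--")
        }

        val request = HttpRequest.newBuilder(URI.create("http://localhost:8080/photos"))
            .header("Content-Type", "multipart/mixed; boundary=$boundary")
            .POST(HttpRequest.BodyPublishers.ofString(body, Charsets.ISO_8859_1))
            .build()

        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    }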

    Interested in more REST related articles? Have a look at my REST API design page.

  • Sunday, 27 June, 2021

    Kotlin: Type conversion with adapters

    In this post we will learn how we can use Kotlin extension functions to provide a simple and elegant type conversion mechanism.

    If you have used Apache Sling before, you are probably familiar with Sling's usage of adapters. We will implement a very similar approach in Kotlin.

    Creating an extension function

    With Kotlin's extension functions we can add methods to existing classes. The following declaration adds an adaptTo() method to all subtypes of Any.

    inline fun <reified T : Any> Any.adaptTo(): T {
        // ...
    }
    

    The generic parameter T specifies the target type that should be returned by the method. We keep the method body empty for the moment.

    Converting an object of type A to an object of type B will look like this with our new method:

    val a = A("foo")
    val b = a.adaptTo<B>()

    Providing conversion rules with adapters

    In order to implement the adaptTo() method we need a way to define conversion rules.

    We use a simple Adapter interface for this:

    import kotlin.reflect.KClass
    
    interface Adapter {
        fun <T : Any> canAdapt(from: Any, to: KClass<T>): Boolean
        fun <T : Any> adaptTo(from: Any, to: KClass<T>): T
    }

    canAdapt(..) returns true when the implementing class is able to convert the from object to type to.

    adaptTo(..) performs the actual conversion and returns an object of type to.

    Searching for an appropriate adapter

    Our adaptTo() extension function needs a way to access available adapters. So, we create a simple list that stores our adapter implementations:

    val adapters = mutableListOf<Adapter>()
    

    Within the extension function we can now search the adapters list for a suitable adapter:

    inline fun <reified T : Any> Any.adaptTo(): T {
        val adapter = adapters.find { it.canAdapt(this, T::class) }
                ?: throw NoSuitableAdapterFoundException(this, T::class)
        return adapter.adaptTo(this, T::class)
    }
    
    class NoSuitableAdapterFoundException(from: Any, to: KClass<*>)
        : Exception("No suitable adapter found to convert $from to type $to")
    
    

    If an adapter is found that can be used for the requested conversion we call adaptTo(..) of the adapter and return the result. In case no suitable adapter is found a NoSuitableAdapterFoundException is thrown.

    Example usage

    Assume we want to convert JSON strings to Kotlin objects using the Jackson JSON library. A simple adapter might look like this:

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.kotlin.KotlinModule
    
    class JsonToObjectAdapter : Adapter {
        private val objectMapper = ObjectMapper().registerModule(KotlinModule())
    
        override fun <T : Any> canAdapt(from: Any, to: KClass<T>) = from is String
    
        override fun <T : Any> adaptTo(from: Any, to: KClass<T>): T {
            require(canAdapt(from, to))
            return objectMapper.readValue(from as String, to.java)
        }
    }

    Now we can use our new extension method to convert a JSON string to a Person object:

    data class Person(val name: String, val age: Int)
    
    fun main() {
        // register available adapter at application start
        adapters.add(JsonToObjectAdapter())
    
        ...
        
        // actual usage
        val json = """
            {
                "name": "John",
                "age" : 42
            }
        """.trimIndent()
    
        val person = json.adaptTo<Person>()
    }
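
    The same mechanism works in the opposite direction. A hypothetical adapter that converts arbitrary objects to JSON strings (not part of the original example) could look like this:

    class ObjectToJsonAdapter : Adapter {
        private val objectMapper = ObjectMapper().registerModule(KotlinModule())
    
        override fun <T : Any> canAdapt(from: Any, to: KClass<T>) = to == String::class
    
        @Suppress("UNCHECKED_CAST")
        override fun <T : Any> adaptTo(from: Any, to: KClass<T>): T {
            require(canAdapt(from, to))
            return objectMapper.writeValueAsString(from) as T
        }
    }

    Once registered, it is used the same way: person.adaptTo<String>() returns the JSON representation of person.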

    You can find the source code of the examples on GitHub.

    Within adapters.kt you find all the required pieces in case you want to try this on your own. In example-usage.kt you find some adapter implementations and usage examples.

  • Sunday, 13 June, 2021

    Making POST and PATCH requests idempotent

    In an earlier post about idempotency and safety of HTTP methods we learned that idempotency is a positive API feature. It helps make an API more fault-tolerant, as a client can safely retry a request in case of connection problems.

    The HTTP specification defines the GET, HEAD, OPTIONS, TRACE, PUT and DELETE methods as idempotent. Of these methods, GET, PUT and DELETE are the ones usually used in REST APIs. Implementing GET, PUT and DELETE in an idempotent way is typically not a big problem.

    POST and PATCH are a bit different; neither of them is specified as idempotent. However, both can be implemented in an idempotent way, making things easier for clients in case of problems. In this post we will explore different options to make POST and PATCH requests idempotent.

    Using a unique business constraint

    The simplest approach to provide idempotency when creating a new resource (usually expressed via POST) is a unique business constraint.

    For example, consider we want to create a user resource which requires a unique email address:

    POST /users
    
    {
        "name": "John Doe",
        "email": "john@doe.com"
    }

    If this request is accidentally sent twice by the client, the second request returns an error because a user with the given email address already exists. In this case, usually HTTP 400 (bad request) or HTTP 409 (conflict) is returned as status code.

    Note that the constraint used to provide idempotency does not have to be part of the request body. URI parts and relationships can also help form a unique constraint.

    A good example for this is a resource that relates to a parent resource in a one-to-one relation. For example, assume we want to pay an order with a given order-id.

    The payment request might look like this:

    POST /order/<order-id>/payment
    
    {
        ... (payment details)
    }

    An order can only be paid once, so /payment is in a one-to-one relation to its parent resource /order/<order-id>. If there is already a payment present for the given order, the server can reject any further payment attempts.
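
    Server-side, such a check might look like the following Kotlin sketch (in-memory storage and plain ints as status codes, purely for illustration):

    data class Payment(val details: String)
    class Order(val id: String, var payment: Payment? = null)
    
    val orders = mutableMapOf<String, Order>()
    
    // the one-to-one relation between order and payment forms the
    // unique constraint that makes a duplicated POST harmless
    fun handlePayment(orderId: String, paymentDetails: String): Int {
        val order = orders[orderId] ?: return 404 // unknown order
        if (order.payment != null) {
            return 409 // conflict: this order has already been paid
        }
        order.payment = Payment(paymentDetails)
        return 201 // created
    }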

    Using ETags

    Entity tags (ETags) are a good approach to make update requests idempotent. ETags are generated by the server based on the current resource representation and returned in the ETag response header. For example:

    Request

    GET /users/123

    Response

    HTTP/1.1 200 OK
    ETag: "a915ecb02a9136f8cfc0c2c5b2129c4b"
    
    {
        "name": "John Doe",
        "email": "john@doe.com"
    }

    Now assume we want to use a JSON Merge Patch request to update the user's name:

    PATCH /users/123
    If-Match: "a915ecb02a9136f8cfc0c2c5b2129c4b"
    
    {
        "name": "John Smith"
    }

    We use the If-Match header to tell the server to execute the request only if the ETag matches. Updating the resource leads to an updated ETag on the server side. So, if the request is accidentally sent twice, the server rejects the second request because the ETag no longer matches. Usually HTTP 412 (precondition failed) should be returned in this case.

    I explained ETags a bit more detailed in my post about avoiding issues with concurrent updates.

    Obviously, ETags can only be used if the resource already exists. So this solution cannot be used to ensure idempotency when a resource is created. On the plus side, this is a standardized and very well understood approach.
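
    A server-side sketch of the ETag check in Kotlin might look like this (the hash function and the response code for a missing If-Match header are illustrative assumptions):

    import java.math.BigInteger
    import java.security.MessageDigest
    
    // compute an ETag from the current resource representation
    fun etagOf(representation: String): String {
        val digest = MessageDigest.getInstance("MD5").digest(representation.toByteArray())
        return "\"" + BigInteger(1, digest).toString(16).padStart(32, '0') + "\""
    }
    
    fun handleConditionalPatch(ifMatch: String?, currentRepresentation: String): Int {
        val currentEtag = etagOf(currentRepresentation)
        return when {
            ifMatch == null -> 428          // precondition required: client must send If-Match
            ifMatch != currentEtag -> 412   // precondition failed: resource changed in between
            else -> 200                     // apply the patch, which also changes the ETag
        }
    }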

    Using a separate idempotency key

    Yet another approach is to use a separate client-generated key to provide idempotency. With this approach, the client generates a key and adds it to the request using a custom header (e.g. Idempotency-Key).

    For example, a request to create a new user might look like this:

    POST /users
    Idempotency-Key: 1063ef6e-267b-48fc-b874-dcf1e861a49d
    
    {
        "name": "John Doe",
        "email": "john@doe.com"
    }

    Now the server can persist the idempotency key and reject any further requests using the same key.

    There are two questions to think about with this approach:

    • How to deal with requests that have not been completed successfully (e.g. by returning HTTP 4xx or 5xx status codes)? Should the idempotency key be saved by the server in these cases? If so, clients always need to use a new idempotency key if they want to retry requests.
    • What to return if the server receives a request with an already known idempotency key?

    Personally, I tend to save the idempotency key only if the request finished successfully. In the second case I would return HTTP 409 (conflict) to indicate that a request with the given idempotency key has already been executed.

    However, opinions can be different here. For example, the Stripe API makes use of an Idempotency-Key header. Stripe saves the idempotency key and the returned response in all cases. If a provided idempotency key is already present, the stored response gets returned without executing the operation again.

    The latter can confuse the client in my opinion. On the other hand, it gives the client the option to retrieve the response of a previously executed request again.
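
    A minimal Kotlin sketch of the behavior I described (store the key only for successful requests, reject known keys with 409; names and in-memory storage are made up):

    import java.util.concurrent.ConcurrentHashMap
    
    // set of already processed idempotency keys
    val processedKeys: MutableSet<String> = ConcurrentHashMap.newKeySet()
    
    fun handleCreateUser(idempotencyKey: String, body: String): Int {
        // add() is atomic: only the first request with a given key proceeds
        if (!processedKeys.add(idempotencyKey)) {
            return 409 // conflict: key has already been used
        }
        return try {
            createUser(body)
            201 // created
        } catch (e: Exception) {
            // the request failed: forget the key so the client can retry with it
            processedKeys.remove(idempotencyKey)
            500
        }
    }
    
    fun createUser(body: String) { /* create the user resource */ }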

    Summary

    A simple unique business key can be used to provide idempotency for operations that create resources.

    For non-creating operations we can use server-generated ETags combined with the If-Match header. This approach has the advantage of being standardized and widely known.

    As an alternative we can use a client generated idempotency key provided in a custom request header. The server saves those idempotency keys and rejects requests that contain an already used idempotency key. This approach can be used for all types of requests. However, it is not standardized and has some points to think about.


    Interested in more REST related articles? Have a look at my REST API design page.