mscharhag, Programming and Stuff;

A blog about programming and software development topics, mostly focused on Java technologies including Java EE, Spring and Grails.

  • Thursday, 19 November, 2020

    Validation in Kotlin: Valiktor

Bean Validation is the Java standard for validation and can be used in Kotlin as well. However, there are also two popular alternative validation libraries available for Kotlin: Konform and Valiktor. Both implement validation in a more Kotlin-like way, without annotations. In this post we will look at Valiktor.

    Getting started with Valiktor

    First we need to add the Valiktor dependency to our project.

    For Maven:

    <dependency>
      <groupId>org.valiktor</groupId>
      <artifactId>valiktor-core</artifactId>
      <version>0.12.0</version>
    </dependency>

    For Gradle:

    implementation 'org.valiktor:valiktor-core:0.12.0'

    Now let's look at a simple example:

    class Article(val title: String, val text: String) {
        init {
            validate(this) {
                validate(Article::text).hasSize(min = 10, max = 10000)
                validate(Article::title).isNotBlank()
            }
        }
    }

Within the init block we call the validate(..) function to validate the Article object. validate(..) accepts two parameters: the object that should be validated and a validation function. In the validation function we define validation constraints for the Article class.

    Now we try to create an invalid Article object with:

    Article(title = "", text = "some article text")
    

    This causes a ConstraintViolationException to be thrown because the title field is not allowed to be empty.

    More validation constraints

    Let's look at a few more example validation rules:

    validate(this) {
        
        // Multiple constraints can be chained
        validate(Article::authorEmail)
                .isNotBlank()
                .isEmail()
                .endsWith("@cool-blog.com")
    
        // Nested validation
        // Checks that Article.category.name is not blank
        validate(Article::category).validate {
            validate(Category::name).isNotBlank()
        }
    
        // Collection validation
        // Checks that no Keyword in the keywords collection has a blank name
        validate(Article::keywords).validateForEach {
            validate(Keyword::name).isNotBlank()
        }
    
        // Conditional validation
        // if the article is published the permalink field cannot be blank
        if (isPublished) {
            validate(Article::permalink).isNotBlank()
        }
    }

    Validating objects from outside

In the previous examples the validation constraints are implemented within the object's init block. However, it is also possible to perform the validation outside the class.

    For example:

    val person = Person(name = "")
    
    validate(person) {
        validate(Person::name).isNotBlank()
    }

This validates the previously created Person object and causes a ConstraintViolationException to be thrown (because name is empty).

    Creating a custom validation constraint

To define our own validation methods we need two things: an implementation of the Constraint interface and an extension function. The following snippet shows an example validation method that makes sure an Iterable<T> does not contain duplicate elements:

    object NoDuplicates : Constraint
    
    fun <E, T> Validator<E>.Property<Iterable<T>?>.hasNoDuplicates()
            = this.validate(NoDuplicates) { iterable: Iterable<T>? ->
    
        if (iterable == null) {
            return@validate true
        }
    
        val list = iterable.toList()
        val set = list.toSet()
        set.size == list.size
    }

This adds a method named hasNoDuplicates() to Validator<E>.Property<Iterable<T>?>. So this method can be called for fields of type Iterable<T>. The extension function is implemented by calling validate(..) with our Constraint and passing a validation function.

    In the validation function we implement the actual validation. In this example we simply convert the Iterable to a List and then the List to a Set. If duplicate elements are present both collections have a different size (a Set does not contain duplicate elements).

    We can now use our hasNoDuplicates() validation method like this:

    class Article(val keywords: List<Keyword>) {
        init {
            validate(this) {
                validate(Article::keywords).hasNoDuplicates()
            }
        }
    }

    Conclusion

Valiktor is an interesting alternative for validation in Kotlin. It provides a fluent DSL to define validation rules. Those rules are defined in standard Kotlin code (and not via annotations), which makes it easy to add conditional logic. Valiktor comes with many predefined validation constraints. Custom constraints can easily be implemented using extension functions.

     

  • Friday, 6 November, 2020

    REST: Sorting collections

When building a RESTful API we often want to give consumers the option to order collections in a specific way (e.g. ordering users by last name). If our API supports pagination, sorting becomes quite an important feature: clients that only query a specific part of a collection are unable to order the elements themselves.

Sorting is typically implemented via query parameters. In the next sections we look into common ways to sort collections and a few things we should consider.

    Sorting by single fields

    The easiest way is to allow sorting only by a single field. In this case, we just have to add two query parameters for the field and the sort direction to the request URI.

    For example, we can sort a list of products by price using:

    GET /products?sort=price&order=asc

    asc and desc are usually used to indicate ascending and descending ordering.

    We can reduce this to a single parameter by separating both values with a delimiter. For example:

    GET /products?sort=price:asc

    As we see in the next section, this makes it easier for us to support sorting by more than one field.

    Sorting by multiple fields

    To support sorting by multiple fields we can simply use the previous one-parameter way and separate fields by another delimiter. For example:

    GET /products?sort=price:asc,name:desc

    It is also possible to use the same parameter multiple times:

    GET /products?sort=price:asc&sort=name:desc

Note that using the same parameter multiple times is not exactly described in the HTTP RFC. However, it is supported by most web frameworks (see this discussion on Stack Overflow).
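Parsing such a combined sort parameter on the server is straightforward. A minimal sketch in plain Java (the SortOrder record and defaulting behavior are assumptions for illustration, not from the post):

```java
import java.util.ArrayList;
import java.util.List;

class SortParser {
    record SortOrder(String field, boolean ascending) {}

    // Parses a combined sort parameter like "price:asc,name:desc"
    // into a list of (field, direction) pairs
    static List<SortOrder> parse(String sortParam) {
        List<SortOrder> orders = new ArrayList<>();
        for (String part : sortParam.split(",")) {
            String[] fieldAndDirection = part.split(":");
            // default to ascending if no direction is given
            boolean ascending = fieldAndDirection.length < 2
                    || fieldAndDirection[1].equalsIgnoreCase("asc");
            orders.add(new SortOrder(fieldAndDirection[0], ascending));
        }
        return orders;
    }
}
```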

    Checking sort parameters against a white list

    Sort parameters should always be checked against a white list of sortable fields. If we pass sort parameters unchecked to the database, attackers can come up with requests like this:

    GET /users?sort=password:asc

Yes, this would possibly not be a real issue if passwords are correctly hashed. However, I think you get the point: even if the response does not contain the field we use for ordering, the mere order of collection elements could lead to unintended data exposure.
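Such a check can be as simple as a set lookup. A minimal sketch (the field names here are just examples):

```java
import java.util.Set;

class SortFieldValidator {
    // Only fields in this allow list may be used for sorting
    private static final Set<String> SORTABLE_FIELDS = Set.of("name", "email", "createdAt");

    // Rejects any sort field that is not explicitly allowed
    static String requireSortable(String field) {
        if (!SORTABLE_FIELDS.contains(field)) {
            throw new IllegalArgumentException("Sorting by '" + field + "' is not supported");
        }
        return field;
    }
}
```

In a real API the IllegalArgumentException would typically be mapped to an HTTP 400 response.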

     

  • Monday, 2 November, 2020

    Improving Spring Mock-MVC tests

    Spring Mock-MVC can be a great way to test Spring Boot REST APIs. Mock-MVC allows us to test Spring-MVC request handling without running a real server.

    I used Mock-MVC tests in various projects and in my experience they often become quite verbose. This doesn't have to be bad. However, it often results in copy/pasting code snippets around in test classes. In this post we will look at a couple of ways to clean up Spring Mock-MVC tests.

    Decide what to test with Mock-MVC

    The first question we need to ask is what we want to test with Mock-MVC. Some example test scenarios are:

• Testing only the web layer and mocking all controller dependencies.
• Testing the web layer together with the domain logic, while mocking third-party dependencies like databases or message queues.
• Testing the complete path from web to database by replacing third-party dependencies with embedded alternatives where possible (e.g. H2 or embedded Kafka).

    All these scenarios have their own up- and downsides. However, I think there are two simple rules we should follow:

• Test as much as possible in standard JUnit tests (without Spring). This improves test performance a lot and often makes tests easier to write.
    • Pick the scenario(s) you want to test with Spring and be consistent in the dependencies you mock. This makes tests easier to understand and can speed them up as well. When running many different test configurations, Spring often has to re-initialize the application context which slows tests down.

    When using standard JUnit tests as much as possible the last scenario mentioned above is often a good fit. After we tested all logic with fast unit tests, we can use a few Mock-MVC tests to verify that all pieces work together, from controller to database.

    Cleaning up test configuration using custom annotations

    Spring allows us to compose multiple Spring annotations to a single custom annotation.

    For example, we can create a custom @MockMvcTest annotation:

    @SpringBootTest
    @TestPropertySource(locations = "classpath:test.properties")
    @AutoConfigureMockMvc(secure = false)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface MockMvcTest {}

    Our test now only needs a single annotation:

    @MockMvcTest
    public class MyTest {
        ...
    }

This way we can clean up our tests and get rid of various repeated annotations. It is also useful to standardize the Spring configuration for our test scenarios.

    Improving Mock-MVC requests

    Let's look at the following example Mock-MVC request and see how we can improve it:

    mockMvc.perform(put("/products/42")
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .content("{\"name\": \"Cool Gadget\", \"description\": \"Looks cool\"}")
            .header("Authorization", getBasicAuthHeader("John", "secr3t")))
            .andExpect(status().isOk());

    This sends a PUT request with some JSON data and an Authorization header to /products/42.

The first thing that catches the eye is the JSON snippet within a Java string. This is obviously a problem, as the double-quote escaping required by Java strings makes it barely readable.

Typically we should use an object that is then converted to JSON. Before we look into this approach, it is worth mentioning text blocks. Java text blocks have been introduced as a preview feature in JDK 13 / 14. Text blocks are strings that span multiple lines and require no double-quote escaping.

With text blocks we can format inline JSON in a prettier way. For example:

    mvc.perform(put("/products/42")
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .content("""
                {
                    "name": "Cool Gadget",
                    "description": "Looks cool"
                }
                """)
            .header("Authorization", getBasicAuthHeader("John", "secr3t")))
            .andExpect(status().isOk());  

    In certain situations this can be useful.

    However, we should still prefer objects that are converted to JSON instead of manually writing and maintaining JSON strings.

    For example:

    Product product = new Product("Cool Gadget", "Looks cool");
    mvc.perform(put("/products/42")
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .content(objectToJson(product))
            .header("Authorization", getBasicAuthHeader("John", "secr3t")))
            .andExpect(status().isOk());
    

    Here we create a product object and convert it to JSON with a small objectToJson(..) helper method. This helps a bit. Nevertheless, we can do better.

Our request contains a lot of elements that can be grouped together. When building a JSON REST-API it is likely that we often have to send similar PUT requests. Therefore, we create a small static shortcut method:

    public static MockHttpServletRequestBuilder putJson(String uri, Object body) {
        try {
            String json = new ObjectMapper().writeValueAsString(body);
            return put(uri)
                    .contentType(MediaType.APPLICATION_JSON)
                    .accept(MediaType.APPLICATION_JSON)
                    .content(json);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    This method converts the body parameter to JSON using a Jackson ObjectMapper. It then creates a PUT request and sets Accept and Content-Type headers.

    This reusable method simplifies our test request a lot:

Product product = new Product("Cool Gadget", "Looks cool");
mvc.perform(putJson("/products/42", product)
        .header("Authorization", getBasicAuthHeader("John", "secr3t")))
        .andExpect(status().isOk());

    The nice thing here is that we do not lose flexibility. Our putJson(..) method returns a MockHttpServletRequestBuilder. This allows us to add additional request properties within tests if required (like the Authorization header in this example).

    Authentication headers are another topic we often have to deal with when writing Spring Mock-MVC tests. However, we should not add authentication headers to our previous putJson(..) method. Even if all PUT requests require authentication we stay more flexible if we deal with authentication in a different way.

    RequestPostProcessors can help us with this. As the name suggests, RequestPostProcessors can be used to process the request. We can use this to add custom headers or other information to the request.

    For example:

    public static RequestPostProcessor authentication() {
        return request -> {
            request.addHeader("Authorization", getBasicAuthHeader("John", "secr3t"));
            return request;
        };
    } 

    The authentication() method returns a RequestPostProcessor which adds Basic-Authentication to the request. We can apply this RequestPostProcessor in our test using the with(..) method:

Product product = new Product("Cool Gadget", "Looks cool");
mvc.perform(putJson("/products/42", product).with(authentication()))
        .andExpect(status().isOk());

This not only simplifies our test request. If we change the request header format, we now only need to modify a single method to fix the tests. Additionally, putJson(url, data).with(authentication()) is quite expressive to read.
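The getBasicAuthHeader(..) helper used throughout these examples is not shown in the post; a minimal sketch using java.util.Base64 might look like this:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class AuthHeaders {
    // Builds an HTTP Basic-Authentication header value:
    // "Basic " followed by Base64("username:password")
    static String getBasicAuthHeader(String username, String password) {
        String credentials = username + ":" + password;
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }
}
```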

    Improving response verification

    Now let's see how we can improve response verification.

    We start with the following example:

    mvc.perform(get("/products/42"))
            .andExpect(status().isOk())
            .andExpect(header().string("Cache-Control", "no-cache"))
            .andExpect(jsonPath("$.name").value("Cool Gadget"))
            .andExpect(jsonPath("$.description").value("Looks cool"));

    Here we check the HTTP status code, make sure the Cache-Control header is set to no-cache and use JSON-Path expressions to verify the response payload.

    The Cache-Control header looks like something we probably need to check for multiple responses. In this case, it can be a good idea to come up with a small shortcut method:

    public ResultMatcher noCacheHeader() {
        return header().string("Cache-Control", "no-cache");
    }

    We can now apply the check by passing noCacheHeader() to andExpect(..):

    mvc.perform(get("/products/42"))
            .andExpect(status().isOk())
            .andExpect(noCacheHeader())
            .andExpect(jsonPath("$.name").value("Cool Gadget"))
            .andExpect(jsonPath("$.description").value("Looks cool"));
    

    The same approach can be used to verify the response body.

    For example we can create a small product(..) method that compares the response JSON with a given Product object:

    public static ResultMatcher product(String prefix, Product product) {
        return ResultMatcher.matchAll(
                jsonPath(prefix + ".name").value(product.getName()),
                jsonPath(prefix + ".description").value(product.getDescription())
        );
    }

    Our test now looks like this:

    Product product = new Product("Cool Gadget", "Looks cool");
    mvc.perform(get("/products/42"))
            .andExpect(status().isOk())
            .andExpect(noCacheHeader())
            .andExpect(product("$", product));

    Note that the prefix parameter gives us flexibility. The object we want to check might not always be located at the JSON root level of the response.

    Assume a request might return a collection of products. We can then use the prefix parameter to select each product in the collection. For example:

    Product product0 = ..
    Product product1 = ..
    mvc.perform(get("/products"))
            .andExpect(status().isOk())
            .andExpect(product("$[0]", product0))
            .andExpect(product("$[1]", product1));
      

With ResultMatcher methods we avoid scattering the exact response data structure over many tests. This again supports refactoring.

    Summary

    We looked into a few ways to reduce verbosity in Spring Mock-MVC tests. Before we even start writing Mock-MVC tests we should decide what we want to test and what parts of the application should be replaced with mocks. Often it is a good idea to test as much as possible with standard unit tests (without Spring and Mock-MVC).

    We can use custom test annotations to standardize our Spring Mock-MVC test setup. With small shortcut methods and RequestPostProcessors we can move reusable request code out of test methods. Custom ResultMatchers can be used to improve response checks.

    You can find the example code on GitHub.

  • Tuesday, 20 October, 2020

    REST: Updating resources

    When building RESTful APIs over HTTP the PUT method is typically used for updating, while POST is used for creating resources. However, create and update operations do not perfectly align with the HTTP verbs PUT and POST. In certain situations PUT can also be used for resource creation. See my post about the differences between POST, PUT and PATCH for more details.

    Within the next sections we will look at updating resources with PUT.

    Note that this post does not cover partial updates (e.g. updating only a single field) which can be done with HTTP PATCH. This topic will be covered in a separate future blog post.

    Updating resource with HTTP PUT

    HTTP PUT replaces the resource at the request URI with the given values. This means the request body has to contain all available values, even if we only want to update a single field.

    Assume we want to update the product with ID 345. An example request might look like this:

    PUT /products/345
    Content-Type: application/json
    
    {
        "name": "Cool Gadget",
        "description": "Looks cool",
        "price": "24.99 USD"
    }

    Responses to HTTP PUT update operations

You can find various discussions about the question whether an update via HTTP PUT should return the updated resource.

There is no single truth here. If you think it is useful to return the updated resource in your situation: do it. Just make sure to be consistent for all update operations in your API.

The server usually responds to HTTP PUT requests with one of the following HTTP status codes:

• HTTP 200 (Ok): The request has been processed successfully and the response contains the updated resource.
    • HTTP 204 (No content): The request has been processed successfully. The updated resource is not part of the response.
    • HTTP 400 (Bad request): The operation failed due to invalid request parameters (e.g. missing or invalid values in the request body).

    Note that responses to HTTP PUT are not cacheable (See the last paragraph of RFC 7231 4.3.4).

Replacing resources in real life

As mentioned earlier, HTTP PUT replaces the resource at a given URI. In real life this can lead to various discussions, because resources are often not really replaced.

Assume we send a GET request to the previously used product resource. The response payload might look like this:

    GET /products/345
    
    {
        "id": 345,
        "name": "Cool Gadget",
        "description": "Looks cool",
        "price": "24.99 USD",
        "lastUpdated": "2020-10-17T09:31:17",
    "creationDate": "2019-12-21T07:14:31",
        "_links": [
            { "rel": "self", "href": "/products/345"},
            ..
        ]
    }

    Besides name, description and price we get the product ID, creation and update dates and a hypermedia _links element.

    id and creationDate are set by the server when the resource is created. lastUpdated is set whenever the resource is updated. Resource links are built by the server based on the current resource state.

In practice there is no reason why an update request needs to contain those fields. They are either ignored by the server or can lead to HTTP 400 responses if the client sends unexpected values.

    One point can be made here about lastUpdated. It would be possible to use this field to detect concurrent modification on the server. In this case, clients send the lastUpdated field they retrieved via a previous GET request back to the server. On an update request the server can now compare the lastUpdated value from the request with the one stored on the server. If the server state is newer, the server responds with HTTP 409 (Conflict) to notify the client that the resource has been changed since the last GET request.
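The comparison described above can be sketched as follows (a hypothetical server-side helper, returning the HTTP status code to send; not from the post):

```java
import java.time.Instant;

class ConflictChecker {
    // Compares the lastUpdated value sent by the client with the server state.
    // If the server state is newer, the resource changed since the client's
    // last GET and we respond with 409 (Conflict); otherwise the update may proceed.
    static int checkForConflict(Instant clientLastUpdated, Instant serverLastUpdated) {
        if (serverLastUpdated.isAfter(clientLastUpdated)) {
            return 409; // Conflict
        }
        return 200; // Ok, perform the update
    }
}
```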

    However, the same can be accomplished using the HTTP ETag header in a more standardized way.

It can now be discussed whether we really replace the resource if we do not send certain fields with the PUT request.

    I recommend being pragmatic and only require the fields that can be modified by the client. Other fields can be skipped. However, the server should not deny the request if other fields are sent. Those fields should just be ignored. This gives the client the option to retrieve the resource via a GET request, modify it and send it back to the server.

    HTTP PUT and idempotency

    The PUT method is idempotent. This means that multiple identical PUT requests must result in the same outcome. Typically no extra measures are required to achieve this as update behavior is usually idempotent.
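As a toy illustration of why replacement semantics are naturally idempotent (a hypothetical in-memory store, purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

class ProductStore {
    private final Map<Integer, String> products = new HashMap<>();

    // PUT semantics: replace the representation stored under the given id.
    // Sending the identical request again leaves the resource unchanged.
    String put(int id, String representation) {
        products.put(id, representation);
        return products.get(id);
    }
}
```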

    However, if we look at the previous example GET request, there is again something that can be discussed:

    Does the lastUpdated field break idempotency for update requests?

    There are (at least) two valid ways to implement a lastUpdated field on the server:

    • lastUpdated changes whenever the resource state changes. In this case we have no idempotency-issue. If multiple identical PUT requests are sent, only the first one changes the lastUpdated field.
    • lastUpdated changes with every update request even if the resource state does not change. Here lastUpdated tells us how up-to-date the resource state is (and not when it changed the last time). Sending multiple identical update requests results in a changing lastUpdated field for every request.

    I would argue that even the second implementation is not a real problem for idempotency.    

    The HTTP RFC says:

    Like the definition of safe, the idempotent property only applies to what has been requested by the user; a server is free to log each request separately, retain a revision control history, or implement other non-idempotent side effects for each idempotent request.

    A changing lastUpdated field can be seen as a non-idempotent side effect. It has not been actively requested by the user and is completely managed by the server.

  • Thursday, 15 October, 2020

    Spring Security: Delegating authorization checks to bean methods

    In this post we will learn how authorization checks can be delegated to bean methods with Spring Security. We will also learn why this can be very useful in many situations and how it improves testability of our application. Before we start, we will quickly look over common Spring Security authorization methods.

    Spring Security and authorization

Spring Security provides multiple ways to deal with authorization. Some of them are based on user roles, others are based on more flexible expressions or custom beans. I don't want to go into details here; many articles are already available on this topic. Just to give you a quick overview, here are a few commented examples of common ways to define access rules with Spring Security:

    Restricting URL access via a WebSecurityConfigurerAdapter:

    public class SecurityConfig extends WebSecurityConfigurerAdapter {
        
        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.authorizeRequests()
            
                // restrict url access based on roles
                .antMatchers("/internal/**").hasRole("ADMIN")
                .antMatchers("/projects/**").hasRole("USER")
                
                // restrict url access based on expression
                .antMatchers("/users/{username}/profile")
                    .access("principal.username == #username");            
        }
    }

    Using annotations to restrict access to methods:

    @Service
    public class SomeService {
    
    // Using Spring's @Secured annotation for role checks
        @Secured("ROLE_ADMIN")
        public void doAdminStuff() { }
    
    // Using the JSR-250 @RolesAllowed annotation for role checks
        @RolesAllowed("ROLE_ADMIN")
        public void doOtherAdminStuff() { }
    
    // Using Spring's @PreAuthorize annotation with an expression
        @PreAuthorize("hasRole('ADMIN') and hasIpAddress('192.168.1.0/24')")
        public void doMoreAdminStuff() { }
        
        // Using an expression to delegate to a PermissionEvaluator bean
        @PreAuthorize("hasPermission(#stuff, 'write')")
        public void doStuff(Stuff stuff) { }
    }

    What to use when?

If roles are the only thing you need, it is easy. You just need to decide if you prefer defining the required roles based on URLs or based on methods in your Java code. If you prefer the latter, just pick one annotation and use it consistently.

    In case you need some ACL-like security (e.g. User x has permission y on object z) using @PreAuthorize with hasPermission(..) and a custom PermissionEvaluator is often a good choice. Also, have a look at the Spring Security ACL support.

However, there is a huge field between both approaches where roles are not enough but ACLs might be too fine-grained or just the wrong tool. Here are a few example authorization rules that do not fit well into either solution:

    Access to a resource should only be given ..

    • .. to the owner of the resource (e.g. a user can only change his own profile)
    • .. to users with role x from department y
    • .. during standard business times
    • .. to administrators who signed in using two-factor authentication
    • .. to users who connect from specific IP addresses

    All those examples can probably be solved by building a security expression and passing it to @PreAuthorize. However, in practice it is often not that simple.

Let us look at the last example (the IP address check). The previously shown code snippet contains a @PreAuthorize example that does exactly this:

    @PreAuthorize("hasRole('ADMIN') and hasIpAddress('192.168.1.0/24')")
    

    This looks nice as an example and shows what you can do with security expressions. However, now consider:

    • You possibly need to define more than one IP range. So, you have to combine multiple hasIpAddress(..) checks.
    • You probably do not want to hard-code IP addresses in your code. Instead they should be resolved from configuration properties.
• It is likely that you need the same access check in different parts of your code, and you probably do not want to duplicate it over and over.

    In other cases you might need to do a database look-up or call another external system to decide if a user is allowed to access a resource.

    Simple expressions are fine. However, if they get larger and are scattered all over a code base they can become painful to maintain.
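For example, checking a client address against multiple configured IP ranges could be encapsulated in one reusable method. A minimal IPv4 CIDR matcher might be sketched like this (an assumed helper, not part of Spring Security):

```java
import java.util.List;

class IpRangeChecker {
    // Checks an IPv4 address against a list of configured CIDR ranges,
    // e.g. resolved from configuration properties
    static boolean isAllowed(String ip, List<String> cidrRanges) {
        return cidrRanges.stream().anyMatch(range -> matches(ip, range));
    }

    private static boolean matches(String ip, String cidr) {
        String[] parts = cidr.split("/");
        int prefixLength = Integer.parseInt(parts[1]);
        // /0 matches everything; otherwise build the network mask
        int mask = prefixLength == 0 ? 0 : -1 << (32 - prefixLength);
        return (toInt(ip) & mask) == (toInt(parts[0]) & mask);
    }

    private static int toInt(String ip) {
        int result = 0;
        for (String octet : ip.split("\\.")) {
            result = (result << 8) | Integer.parseInt(octet);
        }
        return result;
    }
}
```

Such a method could then be called from a @PreAuthorize expression via a bean reference, as shown in the next section.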

    Side note: Spring Security implements method security by proxying the target bean. Security checks are then added via the proxy. If you don't know about proxies, you should probably read my post about the Proxy pattern.

    Delegating access decisions to beans

Within security expressions we can reference beans using the @beanname syntax. This feature can help us implement the previously described authorization rules.

    Let's look at an example:

    @Service
    public class ProjectService {
    
        @PreAuthorize("@projectAccess.canUpdateProjectName(#id)")
        public void updateProjectName(int id, String newName) {
            ...
        }
        
        @PreAuthorize("@projectAccess.canDeleteProject(#id)")
        public void deleteProject(int id) {
            ...
        }
    }

    Here we define a ProjectService class with two methods, both annotated with @PreAuthorize. Within the security expression we delegate the access check to methods of a bean named projectAccess. Relevant method parameters (here id) are passed to projectAccess methods.

    projectAccess looks like this:

    @Component("projectAccess")
    public class ProjectAccessHandler {
    
        private final ProjectRepository projectRepository;
        private final AuthenticatedUserService authenticatedUserService;
    
        public ProjectAccessHandler(ProjectRepository repo, AuthenticatedUserService aus) {
            this.projectRepository = repo;
            this.authenticatedUserService = aus;
        }
    
        public boolean canUpdateProjectName(int id) {
            return isProjectOwner(id);
        }
    
        public boolean canDeleteProject(int id) {
            return isProjectOwner(id);
        }
    
        private boolean isProjectOwner(int id) {
            User user = authenticatedUserService.getAuthenticatedUser();
            Project project = projectRepository.findById(id);
            return (project.getOwner().equals(user.getUsername()));
        }
    }

It is a simple bean with two public methods that are called via security expressions. In both cases only the owner of the project is allowed to perform the operation. To determine the project owner we first have to look up the related project using a ProjectRepository bean.

    The injected AuthenticatedUserService is a simple facade around Spring Security's SecurityContextHolder:

    @Service
    public class AuthenticatedUserService {
        public User getAuthenticatedUser() {
            Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
            return (User) authentication.getPrincipal();
        }
    }

This cleans up our code a little bit because it removes Spring Security internals (and the type cast) from our access control logic. It also becomes helpful when writing unit tests: this way we do not have to deal with static method calls during tests.

Note that we use the standard Spring Security User class as principal for simplicity in this example. Often it is a good idea to create your own customized principal class. However, that is a topic for another blog post.

    Testing access rules

    Another important benefit of this approach is that we can test access rules in simple unit tests. No Spring application context is required to evaluate @PreAuthorize expressions. This speeds up tests a lot.

    A simple test for canUpdateProjectName(..) might look like this:

    public class ProjectAccessHandlerTest {
    
        private ProjectRepository repository = mock(ProjectRepository.class);
        private AuthenticatedUserService service = mock(AuthenticatedUserService.class);
        private ProjectAccessHandler accessHandler = new ProjectAccessHandler(repository, service);
        private User john = new User("John", "password", Collections.emptyList());
    
        @Test
        public void canUpdateProjectName_isOwner() {
            Project project = new Project(1, "John", "John's project");
            when(repository.findById(1)).thenReturn(project);
            when(service.getAuthenticatedUser()).thenReturn(john);
            assertTrue(accessHandler.canUpdateProjectName(1));
        }
    
        @Test
        public void canUpdateProjectName_isNotOwner() {
            Project project = new Project(1, "Anna", "Anna's project");
            when(repository.findById(1)).thenReturn(project);
            when(service.getAuthenticatedUser()).thenReturn(john);
            assertFalse(accessHandler.canUpdateProjectName(1));
        }
    }

    Summary

Many authorization requirements cannot be solved by using roles alone, and ACLs often do not fit. In those situations it can be a viable solution to create separate beans for handling access checks. With @PreAuthorize we can delegate the authorization check to those beans. This also simplifies writing tests, as we do not have to create a Spring application context to test authorization constraints.

    You can find the shown example code on GitHub.