Almost every backend application created today has a REST API, and most “serious” applications have a hundred database tables or more. In such cases there are two ways to create a consistent REST API: code generation and runtime reflection. Both approaches have upsides and downsides. I have implemented a runtime-reflection-based REST API before (in PHP) and have now done so again in a Java library.
Ambitions are high
I have seen many slow and inconsistent REST APIs in my career. That’s why the ambitions of this library are high:
- High performance: more than 5000 transactions per second on a laptop
- An easy-to-use Spring Boot add-on for most databases (using jOOQ)
- Production-ready, with hooks to add security and validation
It all boils down to this: you should be tempted to use it on your next OLTP build!
Opinionated
The library is certainly opinionated, so if you disagree with any of the following, you should probably look at other tools, such as spring-data-rest.
1) Single-field technical primary keys only (auto-increment or UUID).
Primary keys are fields that uniquely identify a record and that do not change. For example: an email address may be unique for a user, but it can be changed, so it does not make a good primary key. You can only guarantee that a value won’t change when it carries no meaning. Therefore technical primary keys are the way to go.
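To illustrate what a technical primary key looks like in practice, here is a minimal sketch using jOOQ’s DDL API (the library’s database layer). The table and column names are hypothetical and this is not the library’s own code:

```java
import static org.jooq.impl.DSL.*;
import static org.jooq.impl.SQLDataType.*;

import org.jooq.DSLContext;

public class SchemaSketch {
    // Hypothetical example: the record's identity is a meaningless
    // auto-increment column, so the key never has to change.
    static void createUsersTable(DSLContext dsl) {
        dsl.createTable("users")
           .column("id", BIGINT.identity(true))   // technical primary key
           .column("email", VARCHAR(255))         // unique, but mutable: not a key
           .constraints(
               constraint("pk_users").primaryKey("id"),
               constraint("uq_users_email").unique("email"))
           .execute();
    }
}
```

The email column still gets a unique constraint; it just doesn’t carry the burden of being the record’s identity.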
2) The distinction between REST’s PUT and PATCH does not map cleanly onto databases.
If PUT is a full update, then it can never be executed successfully, because identity columns (primary keys) may not be updated. And if you leave out the primary keys, then, according to the standard, you should be using PATCH. When creating a record you may leave out values as well; omitted fields have a specific behavior in SQL: on insert they are set to their defaults. When updating a record we can likewise specify only the fields to update. So we should either always use PATCH or allow partial updates via PUT. I think the latter makes more sense.
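To make the partial-update semantics concrete, here is a minimal sketch of how they could translate to SQL with jOOQ. This is my own illustration, not the library’s implementation; the `id` column name and the handler signature are assumptions:

```java
import static org.jooq.impl.DSL.*;

import java.util.LinkedHashMap;
import java.util.Map;

import org.jooq.DSLContext;
import org.jooq.Field;

public class PartialUpdateSketch {
    // Hypothetical handler: "body" holds only the fields the client sent,
    // so omitted columns are simply left untouched by the UPDATE.
    static int update(DSLContext dsl, String tableName, Object id, Map<String, Object> body) {
        Map<Field<?>, Object> changes = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : body.entrySet()) {
            if (!e.getKey().equals("id")) {        // the primary key may not be updated
                changes.put(field(name(e.getKey())), e.getValue());
            }
        }
        return dsl.update(table(name(tableName)))
                  .set(changes)
                  .where(field(name("id")).eq(id))
                  .execute();
    }
}
```

Whether the request arrives as PUT or PATCH, the generated statement is the same: only the provided columns appear in the SET clause.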
3) No HATEOAS, such as HAL; metadata is exposed on separate endpoints.
If you want metadata, you can request it from a separate endpoint. There is really no benefit in sending both in the same response. I’m not saying metadata is useless; on the contrary, there are great initiatives, such as Swagger, that allow you to use the metadata to generate documentation.
Features
This is where the library should really stand out: it should include every feature you can dream of, so many that you would be demotivated to “roll your own”. A selection of the features I want to implement:
- Supports POST variables as input (x-www-form-urlencoded)
- Supports a JSON object as input
- Supports a JSON array as input (batch insert)
- Supports file upload from web forms (multipart/form-data)
- Condensed JSON output: first row contains field names (non-default)
- Sanitize and validate input using callbacks
- Permission system for databases, tables, columns and records
- Multi-tenant database layouts are supported
- Multi-domain CORS support for cross-domain requests
- Combined requests with support for multiple table names
- Search support on multiple criteria
- Pagination, seeking, sorting and column selection
- Relation detection and filtering on foreign keys
- Foreign keys are turned into objects on demand
- Atomic increment support via PATCH (for counters; see the sketch after this list)
- Binary fields supported with base64 encoding
- Spatial/GIS fields and filters supported with WKT
- Unstructured data support through JSON/JSONB
- Generate API documentation using Swagger tools
- Authentication via JWT or username/password
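To give a taste of one item from the list, consider atomic increments: reading a counter and writing it back loses updates under concurrency, so a PATCH carrying a delta should translate into a single relative UPDATE. Here is a minimal sketch with jOOQ; the request mapping and names are hypothetical, not the library’s actual code:

```java
import static org.jooq.impl.DSL.*;

import org.jooq.DSLContext;
import org.jooq.Field;

public class AtomicIncrementSketch {
    // Hypothetical: PATCH /records/events/42 with body {"hits": 3} could be
    // interpreted as "add 3 to hits" instead of "set hits to 3".
    static int increment(DSLContext dsl, String tableName, Object id,
                         String column, Number delta) {
        Field<Number> counter = field(name(column), Number.class);
        return dsl.update(table(name(tableName)))
                  .set(counter, counter.add(delta))  // one atomic UPDATE, no read-modify-write
                  .where(field(name("id")).eq(id))
                  .execute();
    }
}
```

Because the addition happens inside the database, two concurrent increments of the same counter both take effect, which a read-then-PUT cycle cannot guarantee.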
Many features are already working, and I invite you to give it a try.