We propose a principled replacement for these messy solutions: an isolated software layer that translates data between schemas on demand. This layer allows developers to maintain strong compatibility with many schema versions without complicating the main codebase. Translation logic is defined by composing bidirectional lenses, a kind of data transformation that can run both forward and backward.
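To make the lens idea concrete, here is a minimal sketch in TypeScript (hypothetical types, not Cambria's actual API): a lens pairs a forward translation with a backward one, so a document can round-trip between two schema versions, and lenses compose so a chain of versions needs no N×N translators.

```typescript
// A bidirectional lens: a forward translation paired with a backward one.
type Lens<A, B> = {
  forward: (a: A) => B;
  backward: (b: B) => A;
};

// Example: schema v1 calls the field `name`, schema v2 renames it to `title`.
type DocV1 = { name: string; body: string };
type DocV2 = { title: string; body: string };

const renameLens: Lens<DocV1, DocV2> = {
  forward: (d) => ({ title: d.name, body: d.body }),
  backward: (d) => ({ name: d.title, body: d.body }),
};

// Lenses compose: translating v1 -> v3 is forward of v1->v2 then v2->v3,
// and the backward direction composes in the opposite order.
function compose<A, B, C>(l1: Lens<A, B>, l2: Lens<B, C>): Lens<A, C> {
  return {
    forward: (a) => l2.forward(l1.forward(a)),
    backward: (c) => l1.backward(l2.backward(c)),
  };
}
```

The point of the pattern is that each version pair is one small, isolated module, which is exactly the displacement out of the main codebase that the rest of this post questions.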
What I fail to see with this project (and with others that try to solve or simplify this difficult problem) is precisely why (the devil is in the detail) it is a solution. It is different and worth a try, I’m not disputing that. But the description alone does not provide any convincing insight into what makes this approach superior to the large number of similar attempts.
In the case of Project Cambria the idea is to move translation logic out of the codebase, that much is clear. To be convincing they would have to articulate the global benefit of this displacement. Why is it simpler, from an implementor’s point of view, to work on these external transformation lenses/modules rather than keeping them in the main codebase? This is a legitimate question that cannot be dismissed with an answer such as “it’s always bad to clutter the main codebase” (see how GitLab does a magnificent job of maintaining a huge codebase, for a counterexample).
I tend to like it when the main codebase is minimal and responsibilities are clearly separated. But I also acknowledge that this comes at a cost: it is more complicated to define and to work on. There is a delicate equilibrium between what is theoretically sane (ActivityPub and other data-driven protocols) and what software developers are willing to use on a daily basis (REST APIs and other tightly coupled ways of interacting with a web service). Mastodon is a fine example of a web service that mixes both approaches.
To be clear, I’m not arguing the pros and cons of both approaches: that’s a debate that is way above my pay grade.