It's interesting to compare and contrast this method of API management with Stripe.
As far as I understand, the Stripe API will continue to work indefinitely so long as you lock your API version, whereas Shopify will eventually break your app, since they essentially backport breaking changes to older API versions.
Initially I thought Stripe's method was superior and would provide the best API experience, but I realized that Stripe and Shopify have different incentives w/r/t their APIs.
For Stripe, breaking a functioning site harms revenue, and generally the developer is the Stripe customer.
For Shopify, the customer is the store owner, and for the most part the store will continue to function because that is mostly controlled by Shopify. The developer API is for added functionality, and it is in the interest of both Shopify and the merchant that those apps continue to be updated and utilize the latest features.
So, two different ways of managing breaking changes, but both are ultimately centered around providing the best customer experience.
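Pinning like this is typically just a per-request header. Stripe, for example, supports a Stripe-Version header; a minimal Python sketch (the key and version values are placeholders):

```python
def pinned_headers(api_key: str, api_version: str) -> dict:
    """Build headers that pin requests to one API version.

    With Stripe, sending Stripe-Version makes the response follow
    that version's behavior even as the account default moves on.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Stripe-Version": api_version,  # e.g. a date like "2019-12-03"
    }
```

Every request built with these headers keeps the old behavior until you deliberately bump the pinned version.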
> those apps continue to be updated and utilize the latest features.
While this is orthogonal to the Shopify API and your post, since you mentioned the need for updates, I wanted to take the opportunity to vent my frustration with the constant push to update everything all the time, and with judging any piece of software by "when was the last update" as a metric.
The problem I see is that not all apps or libs need (frequent) updates. Many (maybe most) do need them, but some don't: they provide some functionality, they do it well, and you could call them "complete". Maybe a security fix is needed from time to time, but with mature code that has been in use for many years, even those are not frequent.
For example, consider something like the ping utility. It has done what it does for decades. There was a need to add IPv6 support, but that was almost two decades ago. Why would anyone need to update it? I do not want any additional functionality; I don't want it to send emails or have a social media share button. I want it to send ICMP echo requests and receive ICMP echo replies, nothing more. Aside from some security fixes, no updates should be needed for 10+ years. This utility is done. It should not be frowned upon just because there were no updates for many years.
While of course neither ecommerce nor the Shopify platform is "done", and they get many updates now and will get more in the future, that does not mean some functionality couldn't have reached the "done" stage.
For a "complete and done" addon, there could be a need for a security fix from time to time. There could be a need for some adjustments if a major browser introduces a new deviation from the JS/CSS/HTML standards and forces everyone to update their code. But those events happen only occasionally, so an addon/plugin would not require any updates in the periods between them, and those periods could be many months or years long. But hey: "this addon did not receive any updates for 13 months, it must be really bad and should be avoided." This leads to a situation where a competing solution with tons of bugs looks better just because it receives two updates a week.
Well, first of all, I think it's a straw man to imply that anyone would want to send emails or share to Facebook from a ping utility. That's a big part of what makes utilities different from applications.
Edit: I deleted a couple of sentences and realize now this might not convey exactly what I meant. Utilities are easy to call "done"; applications are not. Applications interact with external forces that change constantly (other software, business processes, law and regulation, etc.). I think in general, updating applications is a necessary thing, bordering on good, regardless of circumstances.
Companies may have really good reasons to break old APIs. I've done this before: retiring an old framework and set of APIs that was inherently much less secure than what the bulk of customers needed.
> I just wanted to use that opportunity to vent my frustration with the constant push to update everything all the time and judging any piece of software by using "when was the last update" as a metric.
The updates in these cases are not needed to get the most recent features, but to stay connected to an interface that changes over time.
Someone needs to update the interface between the two as things evolve. Like it or not, that work needs to be done. If the tool is used and liked by only a few, then why not push that requirement onto those few who use and like it, instead of supporting everything, even what's no longer used?
One reason would be to make sure somebody's still around. I'd make tiny updates to let dependents know that I'm still watching the code base. Where I work, code is (sadly enough) dead pretty much when workers' contracts or consultant hiring agreements end.
Sporadic updates tell those who depend on the code that stability and security updates are still being handled, even if the code is "complete". I'm not sure I'd have a lot of faith in code that looks abandoned.
You think "one thing" is probe connectivity over ICMP and I think "one thing" is probe connectivity. Someone else could reasonably argue that "one thing" is probe the network.
People use this phrase a lot but I feel that they emphasize the wrong part. It's all about "do it well", no matter what you do.
Funny that you mention Stripe, because it was definitely the canonical backwards compatibility API for me.
... Until just recently, in mid-November, they changed some behavior that caused us to unintentionally double/triple/quadruple charge customers in some not-so-uncommon edge cases... I'm still trying to square this one with their support, so details are a bit thin. But it was definitely a big surprise, since given the API versioning stability I never imagined something like this could happen.
You know, you just jogged my memory here. We did have an issue with Stripe changing the order / timing of payment_failed webhooks, which caused them to be sent before the next payment attempt was known.
We used the payment failed hook to send an email to our customers and let them know when we'd be retrying the charge, which was no longer possible because the next_payment_attempt field was null. Before the change, a next_payment_attempt=null meant the charge would not be retried.
I reported the issue and the webhook changes were rolled back a month later. Really threw a wrench into our flow.
I will say that generally speaking, Stripe is the canonical backwards compatibility API in my mind. With a few edge cases.
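The failure mode described above is essentially an overloaded null. A sketch of the kind of handler that broke (the field name matches Stripe's invoice object, but the handler itself is hypothetical):

```python
def dunning_email(event: dict) -> str:
    """Pick the customer email for an invoice.payment_failed event."""
    invoice = event["data"]["object"]
    next_attempt = invoice.get("next_payment_attempt")  # unix time or None
    if next_attempt is not None:
        return f"We'll retry your card at {next_attempt}."
    # Old contract: None meant the charge was abandoned. After the
    # reordering, None could also mean "retry not scheduled yet",
    # so this branch started sending the wrong email.
    return "We won't retry this charge."
```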
Glad to see that Shopify has better API versioning on their mind. When I used their API a few years ago, it was one of the worst APIs to depend on.
To the point that we had to architect our system to alert us to unannounced breaking API changes so we could fix and replay the JSON.
- Moving JSON fields in and out of nested objects didn't seem to count as a breaking change.
- Changes were rarely announced, and there was no changelog of what had changed (they appear to have started one in 2018 [1]).
- When we contacted support about a break, they would often be surprised.
- Often the only sign of an upcoming change was that new fields would start showing up ahead of a larger one.
All this would happen every few months. Reading this article I can start to see the reasons why this was happening.
I maintain a Shopify API package for .NET [0] and this has largely been my experience as well. Their attitude toward breaking API changes has caused me a good deal of frustration in the past. To make matters worse, their docs weren't (and still aren't, in my opinion) that great; my biggest complaint being they typically don't document when values can be null or even have a different type (e.g. property X could be a string or a decimal, but you'll never know looking at their docs).
This has led me to taking the drastic step of making _every_ property nullable. It's gross and feels bad to use, but at least it prevents JSON parse operations from crashing applications when a value is unexpectedly null.
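The same defensive idea, sketched in Python rather than C# for brevity: treat every field as possibly missing, null, or mistyped, and never let deserialization throw (the field name is illustrative):

```python
from typing import Optional

def parse_price(raw: dict) -> Optional[float]:
    """Read a field that may be absent, null, a string, or a number."""
    value = raw.get("price")
    if value is None:
        return None  # absent or explicitly null
    try:
        return float(value)  # accepts "19.99" as well as 19.99
    except (TypeError, ValueError):
        return None  # unexpected type: degrade instead of crashing
```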
To be honest, when I read about these types of difficulties in managing changes in an API over time, I really wonder why more companies don't go whole hog into GraphQL. GraphQL won't solve all your problems (sometimes changes are breaking because business needs require it), but it provides a better scheme for API evolution than any other API toolkit I've used:
1. You can just keep adding new methods and fields as needed, but since each client asks only for the fields specific to what they want, you don't get big bloated response objects.
2. Lots of times your breaking changes only differ slightly from previous versions, and the way GraphQL resolvers are written makes it really easy to refactor things into one base method that both the old and new versions can share.
3. Proper use of the @deprecated schema directive means your doc is 'clean' by always showing the latest version that new users should adopt, but the doc is still there for users on older versions.
4. It's really easy to add logging and tracing in your resolvers to see how often fields are being accessed and who is using them. At some point you may decide to break backwards compatibility by deleting old fields, but you'll know exactly who you are breaking.
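As a concrete example of point 3, deprecation lives right in the schema; a sketch in SDL (the Product type and field names are made up):

```graphql
type Product {
  id: ID!
  # Still served to old clients, but hidden from default docs.
  price: String @deprecated(reason: "Use priceV2, which is numeric.")
  priceV2: Float!
}
```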
The Shopify developer experience is terrible. No fluffy blog can change that fact.
Shopify says their main customer is the store owner. But developers pick up all of the slack for Shopify.
Recurring payments. App.
Store backups. App.
Theme backups. App.
Order editing. Came late 2019.
Checkout. So locked down. Where’s the API?!
Slate tooling. Abandoned.
Starter themes. Abandoned.
Storefront SDK. Terrible documentation.
More than 1 variant image? App.
Metafields. App.
Wholesale. App.
Mailchimp. Removed.
People talk about google abandoning products. Shopify abandons nearly all developer tooling and is so locked down that it’s a constant “app for that” for the basics.
The interesting thing is that more than a few store owners I know who use Shopify have asked how to move off of it.
I guess fulfillment is more important though. Right?!
Shopify's business plan is to offer minimum functionality for a low monthly fee; users then add functionality through the marketplace, of which Shopify takes a 20% cut.
I'm actually working on an open-source fulfillment and operations app for Shopify:
But Shopify holds the transaction fee and charges 2% on top, so in the end Shopify is not that much cheaper, considering BigCommerce doesn't charge this. I am bullish on the long term of BigCommerce, or anything that goes after small and medium businesses.
I don't think Shopify could sustain a real entrance by Adobe with Magento, or a product in the same space from someone like Microsoft, as these companies know developer tooling, and in the end Shopify needs developers. Developers don't need Shopify.
Yes, but there are ways to circumvent this. You could just use Shopify as a headless CMS for $9/month: use their Storefront API and a static site builder like Gatsby as your storefront, then integrate your own payments and create the order on the backend. You could even use the new Stripe Checkout page.
They're already going after Shopify's lunch with Commerce Cloud. That's quite obviously the outcome Adobe is after; it's a subscription business, after all.
Shopify made a change to their API whose impact was easily measurable, but they didn't email us. They refused to grant us a temporary exemption (they would do it for $2,000/month, they said). The end result? They've sunk my business. I've replaced Shopify now by writing my own, but it's too late. My customers have all gone to my competitors and we're looking at pivoting.
It was the change limiting variants. We woke up one day to product additions failing, and it took about a week for the tech team to figure out that we were being throttled because we were exceeding 50k variants. Totally understandable in the bigger picture, but I don't feel we received adequate warning. Developer support had told us that they sometimes grant temporary exemptions, but in our case they refused. They advised we upgrade to Shopify Plus (they quoted us over $2,000) to remove the throttling. Financially it was out of scope for us, so we had to throttle our own customers, which put us at a massive disadvantage.
I ended up writing our own cart software, which was always part of the plan, but in the meantime our business suffered.
I don't hold any grudge about it since we were getting incredible value out of the previous arrangement, and I do think Shopify is amazing software. I've been a paying customer in multiple capacities since at least 2008 and recommend it to people all the time. What happened was just unfortunate timing for us.
Thanks for taking a minute to listen, though. Much appreciated.
You couldn't pay the $6,000-8,000 for the 3-4 months and keep your business afloat until you rewrote it? It kind of sounds like your business was already deep in the red, or some information is missing?
OP wrote that they have employees. If $6k-8k is too much for a company with employees to literally keep it running, it was months away from going completely bust anyway.
I believe you may have lost your connection to the developer experience over time, Tobi, and developers keep Shopify relevant. Look at any store: it's packed full of slow-loading plugins covering some of the very basics. Maybe 2020 is better suited to making some of the tooling better and, we can hope, consistent.
What was the change? AFAIK Shopify would never give a temporary exemption for money, that's very much against their philosophy as a SaaS. Either you provide accurate details or this is just FUD.
It was a limitation on the number of variants. They had no limits prior to this API update I'm talking about. My business allowed people to create merch and we'd add it to a collection on our storefront. Developer support told us sometimes they grant temporary exemptions. They did not. They told us we could upgrade to shopify plus which has no such limitations, but being a bootstrapped company we couldn't swing the $2k per month.
Edit: Just to be clear, it was always on our roadmap to migrate away from Shopify's platform, but they accelerated our timeline and we had to limit the amount of merch our customers could add which obviously led to upset customers.
We're still operating, albeit close to insolvent, and have since launched our new platform. But our reputation has been irreversibly damaged.
Hi everyone. As the founder of an API company, this article has me wondering about the semantics of versioning — so I thought I’d ask the community a question. I hope the Shopify API team doesn’t mind!
When you all think about API versioning, what makes the most sense to you — a semver approach (major.minor.patch) a la NPM, or a date-based approach (2020-01-07) a la AWS? Or is some combination of the two desirable?
At Standard Library [0] we both allow people to publish APIs but also publish API proxies on behalf of partners (Stripe, Slack + others) using a semver approach. It’s not perfect but theoretically enforceable (schema parameter additions can be forced to require a minor update, schema parameter removals can be forced to require a major update). We’ve just stuck to this semver approach based on intuition and haven’t had negative feedback about it, but I do like the idea of time-based versioning.
Would love thoughts! If you want to play around you can build your own APIs using https://code.stdlib.com/, which uses the FunctionScript specification [1] to enforce HTTP request schemas.
I don't think there's one right answer, but my opinion is:
- Treat APIs as immutable.
- Any mutation results in a wholly new API version, not a patch or minor update.
- The developer will learn what changed, and how much changed, by reading the patch notes, not by looking at which semver numbers changed. This is probably a healthy practice to encourage.
- Don't change APIs so much. The interface should be very carefully designed, tested, and shipped like an NES cartridge: consider it impossible to fix once it's shipped.
Therefore just start with `1` and increment each time.
The reason I don't like semver is that it's a developer convenience that leads to sloppy practices. You built your product against a specific API; if the API changes, you need to re-run your entire API evaluation, testing, and blessing workflow. If the delta is tiny (what would have been a patch change), then yay, your task is likely going to be very simple. But you shouldn't see a small version bump and decide you can cut corners.
I don’t disagree with your assessment on semver as it currently exists for, say, NPM packages.
I do think that with web APIs specifically, the surface area is a lot smaller — the HTTP interface is literally all you touch — so semver, in its purest form, is actually completely enforceable as long as you understand the API schema.
Our team has talked a lot about either hard-enforcing or automatically applying semver where applicable. My question to you is — does this sound reasonable, and if you knew a semver contract was actually bound to implementation (i.e. guaranteed and not implied), would you trust it more?
I don't really trust semver insofar as there's nothing to trust, in my opinion. What am I trusting? That I don't have to understand what changed and can take semver's word for it? To me, personally, that's an abdication of my responsibility to know what I'm utilizing and implementing against.
What's the practical value you're hoping to gain by hard-enforcing semver, especially if the surface area of HTTP APIs is a lot smaller? And this isn't a loaded question. I might just not be privy to your team's needs.
Also consider (and this again falls within the realm of personal opinion) the value of achieving _maximum possible statelessness_ as a developer. If I'm reading you right, you're asking me to know and remember, and probably write down somewhere, that your API's versioning means something different than others. So I now have another branching path to maintain: how to behave when your API changes. What I like about the simple numbered API schema is that I can come back to the API usage in a stateless manner. I just see that the number is different and I know what to do: learn what changed and decide if I have to act on it. Of course I could do this by looking at a semver as a unique identifier without semantic meaning, but that kind of leads to my initial point: encourage best practice by not giving the developer a semantic shortcut to which they can abdicate responsibility.
I think you misunderstand where I’m coming from with my line of questioning.
I’m asking questions because we provide tools that standardize the API development experience. Semver is trustable if the tools create the contract instead of the developer that decides to build the API. We deliver the tools that create the contract.
In this sense, APIs built and delivered atop our platform don’t need to “encourage best practice,” best practice is — or at least can be — hardcoded in with no potential for footguns.
Your concerns about semver are well-founded. I’m asking questions because I’m pondering aloud if we can fix them and change semver — with respect to APIs — from a social contract to a coded contract, and include tools necessary to inform end-users of API changes. :)
Yeah it does. Thanks for clarifying. I think there's a very complex conversation to be had here, especially since you're working on a meta-level of tooling to ease API development.
With two days before my first vacation of the year, my brain is a pile of mush. I'm going to politely and apologetically bow out of this conversation. =)
I don't think semver is easily enforceable. Take for example the discussion in this thread about stripe changing the order in which hooks fired, which completely changed the meaning of a null value in the next_payment_attempt field. Just looking at a traditional API schema wouldn't reveal any change, yet this was a major breaking change to some. Maybe you can create an API schema which encompasses the order in which hooks fire, but that still doesn't help if somebody changes the semantics of a field without changing its name or type.
Tbh, it doesn't matter. But the semantics should be such that it doesn't matter -- the user of the API shouldn't care whether it's bound to the implementation or not. But every time you break that abstraction, trust in the abstraction is necessarily reduced.
Binding to the implementation is just the easiest way out -- if your policies, tests, and protocols consistently fail to uphold the abstraction, then you can fall back to this very simple (presumably inefficient) strategy to do so.
But I, as a dev, just want a stable API, and I don't care how it's done.
How would your trust be impacted if there were better communication ahead of a breaking change, i.e. if the experience went well beyond what you currently expect and you were proactively given good reasons?
The gap I see a lot starts with a bad versioning strategy and continues through bad API analytics to bad communications for developers.
The API provider should be able to understand all of the individual clients, to the point where they can decide to progressively migrate them one by one if necessary. There shouldn't be a need to use metadata like IP addresses or user agents to try to identify clients. Unless someone has walked through these scenarios ahead of time, it's difficult to manage when issues arise.
What does a codebase look like with these API versioning exactly?
I feel like code bloat would be hard to maintain. Obviously I could take two endpoint handlers that have the same functionality and move their behavior into a shared function, but there's also tests to consider. As time went on, I'd have a pile of tests running against seemingly random versions.
Eg, `test_blog_create()` might test v1 and v2 of the API as that specific endpoint didn't change, but suddenly in v3 the API _does_ change, so now I need to write a slightly different test to handle that new functionality. `test_blog_create_v3()` or w/e.
I'm not arguing against it, merely noting what I've previously thought about as problematic for implementing versioned APIs.
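One way to keep that test pile manageable is to loop a single test over the versions whose contract is identical and fork only where behavior diverges. A sketch (the client function and payloads are made up):

```python
def create_blog(version: int, title: str) -> dict:
    """Hypothetical client for a versioned blog-creation endpoint."""
    if version >= 3:
        # v3 changed the contract: it now returns a slug too.
        return {"title": title, "slug": title.lower().replace(" ", "-")}
    return {"title": title}

def test_blog_create():
    # One test covers every version where the contract is identical.
    for version in (1, 2):
        assert create_blog(version, "Hello World") == {"title": "Hello World"}

def test_blog_create_v3():
    # v3 diverged, so it forks into its own test.
    assert create_blog(3, "Hello World")["slug"] == "hello-world"
```

When v4 ships without touching this endpoint, you just add 4 to the shared loop instead of copying the whole test.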
Versioning is hard. Versioned APIs are very hard. I'm in an especially tricky spot because we've got customers that might refuse to upgrade our product for years and years, if ever.
Two approaches I've taken, with a blogpost worth of thoughts on them:
1. Every version is a whole new database/webserver stack. You shut the oldest one down when the last customer migrates off it. The bloat is real, but it's dead simple and it works.
2. A single database with a webserver that exposes all the versions. The contract is that the database shall not undergo destructive migrations. Only additive migrations. Or if you must, modify schema in a way that you can deterministically generate every previous API's view. Again, discard older versions when the last customer has been migrated off.
I'm sure you can immediately smell some problems with either of these approaches. Sacrifices are definitely made.
> Every version is a whole new database/webserver stack. You shut the oldest one down when the last customer migrates off it. The bloat is real, but it's dead simple and it works.
That means data is different between API versions?
Yes. But that works, because a customer's data lives in only one version. When the customer upgrades, their data gets migrated. Weird. Dead simple. Works.
There are definitely multiple ways to handle that, but splitting your API into a frontend and a backend seems to be a common and good approach.
The backend is basically the "real" internal API, subject to changes.
The frontend consists of nothing more than forwarding functions that take their input from the outside world and translate it into backend API calls.
How you organize code reuse depends on the language, but inheritance is not a bad way to do it, although you might run into trouble with the size of your hierarchy. You can circumvent that with maps of functions/lambdas, like prototyping in JavaScript.
For testing you then have your usual set of tests for the backend, but for the frontend you just test if the translation between public and internal API is correct. Very easy to test for in most cases.
If you then organize your tests exactly like your frontend versions, i.e. with some kind of inheritance, you only ever have to add tests for the changes in your new version.
It is definitely more work than just having one ever changing API, but it's not that much of a hassle imho.
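A minimal sketch of the forwarding idea (the order-creation functions are invented for illustration):

```python
# The "backend": the real internal API, free to change.
def _create_order(customer_id: str, items: list, currency: str) -> dict:
    return {"customer": customer_id, "items": items, "currency": currency}

# The "frontend": thin forwarding functions, one per public version,
# that only translate the public contract into the internal call.
def create_order_v1(payload: dict) -> dict:
    # v1 predates the currency field, so the translation defaults it.
    return _create_order(payload["customer_id"], payload["items"], "USD")

def create_order_v2(payload: dict) -> dict:
    return _create_order(payload["customer_id"], payload["items"],
                         payload["currency"])
```

When the internal signature changes, only the forwarding functions need touching; the public contracts stay frozen.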
I prefer semver, but it's still not exactly what I want. What I'm looking for is the answer to 'how painful/risky is upgrading likely to be?'. I expect an x.x.1 release to be zero/low risk - small fixes only within current api and contract. I expect a 2.x.x release to come with major risk of breakage (and I'll almost always want to wait for 2.0.2-2.0.3 before actually upgrading - and I expect a 1.x.x -> 2.x.x release to have a risk of bigger changes compared to a 5.x.x -> 6.x.x since the latter is expected to have the fundamentals ironed out by now). But there doesn't seem to be much consensus on whether an x.1.x release can have breaking changes or not - or maybe they don't come with as strict a definition of a breaking change as Shopify is using - so I'm left with treating them the same as 2.x.x (though usually I don't wait for x.x.1 in this case).
I'm pretty sure the semver definition of an x.2.x change is that you're adding surface area to your interface in a way that doesn't overlap with existing surface area, i.e. bytes sent over the wire for x.1.x will result in the same response bytes for x.2.x, all else being equal.
Note that the phrase non-overlapping is where all the complexity is hidden; it’s actually tricky to guarantee that an addition hasn’t changed any existing queries. For example, adding an enum value will mess up clients who query with max(enum_value). Technically, they’re not sending the same bytes, so the change is non-overlapping, but the client might disagree :)
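The max(enum_value) trap is easy to demonstrate; here is a toy version with string statuses (the values are invented):

```python
# v1 of the API exposes three order statuses.
STATUSES_V1 = ["pending", "paid", "refunded"]

def is_final_v1(status: str) -> bool:
    # Fragile client assumption: the max() of the enum is the final state.
    return status == max(STATUSES_V1)  # "refunded"

# The server ships a purely additive, "non-breaking" change...
STATUSES_V2 = STATUSES_V1 + ["voided"]

def is_final_v2(status: str) -> bool:
    # ...and the same client logic now picks a different final state.
    return status == max(STATUSES_V2)  # "voided"
```

Nothing in the schema changed shape, yet the client's behavior silently changed.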
Would a tool that easily shows you documentation / schema changes between API versions be useful to you, or create more trust with a SaaS provider if they offered it?
I do not know any advantage of date-based versioning, other than someone knowing how new it is.
Semver is important for knowing whether or not to evaluate for breakages. You can theoretically combine both of these by making the date the patch version or supplying it as a version metadata
> I do not know any advantage of date-based versioning
It can be very useful for continuous improvement and handling forward compatibility. Here's how Stripe does it: https://stripe.com/en-fr/blog/api-versioning; we found it very convenient.
I think it’s the implied stability, especially for long-standing APIs. S3’s API is 2006-03-01 — meaning there haven’t been breaking changes in over 13 years. This creates a psychological contract with developers that nothing’s changing anytime soon.
The trade off is that AWS has some godawful APIs (DynamoDB has the least intuitive API I have ever worked with). But they’re stable.
If you go with a date-based approach and create a process + contract whereby you guarantee API stability and / or deprecation date, the developer always knows exactly how much time they have before an upgrade.
If you publish API version 1.0.0, and then internally you fix something and publish 1.0.1 (remember: a patch doesn't change the interface at all), should you continue to serve both the 1.0.0 and 1.0.1 APIs? How are they different from the consumer's perspective? What if your reason to release 1.0.1 is to fix a security issue - in what world is it ethical to continue to serve 1.0.0? If you can discontinue serving 1.0.0 at any time (because of a security patch released with 1.0.1), then you can't offer any long-term durability guarantees for early patch versions, and so indeed, offering older patch versions (when newer patches didn't fix security issues) is more likely to break your consumers when you're forced to discontinue older patch versions for security reasons, than if you refused to serve patch-level variants in the first place.
Because you, as upstream, have no control over whether the pure addition of fields will break downstream (since downstream may or may not be strict about what they accept), you should only offer semantically versioned minor releases if you're willing to guarantee durability for earlier minor versions, and can maintain multiple minor versions of your API in parallel. If not, then be explicit about the potential of your changes to break downstream - use timestamps and document sunset dates.
> When you all think about API versioning, what makes the most sense to you — a semver approach (major.minor.patch) a la NPM, or a date-based approach (2020-01-07) a la AWS? Or is some combination of the two desirable?
We're talking about HTTP APIs, right? Then neither. These are solutions with bad trade-offs for the problem at hand.
Version the link relations. This follows the principles that make the Web successful (a.k.a. REST). If that seems weird to you¹, consider the following:
You have a personal homepage type Web site. When you change or add or remove a document, do your users need to upgrade their user agent to keep using the site? Why not?
----
¹ Numerous developers are so enamoured with putting a version number into document URIs that they cannot fathom not doing it. This is another mutilation of the mind à la Dijkstra.
My impression is that semver makes far more sense for published libraries (DLLs or ruby gems or npm packages etc) while date-based makes far more sense for SaaS APIs. The constraints and usage patterns are fairly different between the two.
What specifically about SaaS API constraints do you think makes a date-based approach more appealing than semver?
My high-level feeling is that it’s just way more difficult to ship a SaaS API than a Ruby Gem, so adding semver to that is just another layer of API management everybody has to agree on. Do you agree with this assessment?
For most published libraries every old version is always available and upgrading is never mandatory. For SaaS APIs the constraints of the business means that very few businesses (Stripe being the notable exception) want to support more than a handful of versions at a time, which results in versions regularly reaching end-of-life and completely disappearing. In this world, upgrading to newer APIs is mandatory and fairly frequent.
The result is that API version handles need to prioritize different information. Libraries need a format that makes it easy to ballpark the size of changes between arbitrary not-strictly-sequential versions. SaaS APIs need a form that makes it easy to infer support windows and end-of-life status.
> "SaaS APIs need a form that makes it easy to infer support windows and end-of-life status."
You can infer that from the Release Date though. (e.g. Version 3.4.2 Released on 2019-12-17) To me the power of Semver is that it conveys complex relations between version iterations.
For example:
- 3.4.1 -> 3.4.2 is just fixing bugs in existing functionality
- 3.4.2 -> 3.5.0 is an upgrade containing non-breaking changes
- 3.5.0 -> 4.0.0 is an upgrade containing breaking changes
As a developer, semver plus release date seems to convey everything date-based versioning does, plus I get the advantage of understanding at a glance the importance and risk profile of each release.
Note, this system does not reduce my obligation to run my own tests to verify that the version has in fact lived up to its intention (i.e. a minor version bump did not introduce a breaking change). Even though my obligation is not reduced, it does act as a filter to help prioritize development time for evaluation of performing upgrades.
But with semver you can say: OK, I want every new version up to, but not including, the next minor version (i.e. all fixes). With a date-based version you don't know how 'breaking' the changes will be.
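That constraint is exactly what dependency tooling encodes. In Ruby, for instance, the pessimistic operator expresses "any fix release, but no minor bump":

```ruby
require "rubygems" # Gem::Version and Gem::Requirement ship with Ruby

req = Gem::Requirement.new("~> 3.4.2") # means >= 3.4.2 and < 3.5.0

req.satisfied_by?(Gem::Version.new("3.4.9")) # patch release: accepted
req.satisfied_by?(Gem::Version.new("3.5.0")) # minor bump: rejected
```

There is no equivalent range you could write against `2019-12-17` that promises anything about compatibility.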
This is a good point. Good changelogs, like any documentation, take a lot of effort and so are unfortunately quite rare.
As an API consumer, to me the ideal upgrade-related documentation includes:
* Detailed release notes for each released version
* For non-GA releases like alpha/beta/RC's, describe the changes since the last small release -- typically consumers of a beta are following dev quite closely
* For GA releases, pretend the alpha/beta/etc don't exist at all. Describe the delta from the last GA release. Consumers don't care about the fix for a bug in a beta they never even knew existed.
* When you do breaking changes, provide upgrade guidance, like "If you were doing x before, now you should do y + z instead"
* If appropriate, consider also keeping separate upgrade guidance documentation for major breaking changes, for those laggard consumers that are going from a much older version like 2.x to 4.x. This allows someone to follow more of a checklist to get up-to-date without having to read 900 pages of individual release notes
You can do this with date-based versioning, but it's much harder as a consumer to figure out unless there's very good documentation. If I'm upgrading from "2.0.4" to "4.1.1", I know as an absolute minimum starting point I will be looking at release notes for "3.0" and "4.0", and from that I should get a pretty good sense of the overall effort involved. If I'm upgrading from "2016-11-05" to "2019-12-19", how do I do the equivalent evaluation?
How often do you feel like you find / consume changelogs when you’re looking at APIs? I feel like they’re either non-existent or non-obvious for most SaaS companies.
Do you feel like you’d trust a company’s API more if you could peruse the API changelog more easily?
That’s fair! From my perspective that’s something that’s building trust with the SaaS provider. We’ve all heard, “ugh! This API is so shitty!” before — I’m fascinated by what tools and products can be built to prevent developers from feeling these pains.
Thanks for the feedback. Can definitely see that changelogs are really underinvested in across the industry, and it’s useful to have people mention how valuable they are.
Your presumption of a collective "we" is at the root of the problem: versioning is one of those things in development where the same word can mean very different things to different people (see also: agile, automated testing). Prior to semantic versioning, there was an approach that looked like an agreed-upon standard to consumers of software packages/ libraries/ APIs but was not.
On one extreme you had certain developers (who I am sympathetic to because it's my personality) who were loath to ever label something 1.0 because it implies Doneness and a freedom from bugs that can never be so. On the other side you had people at chop shops who would bump the major version of some boxed product every time they fixed two bugs. Even if you as a good developer did the research to discover that 0.7.6 of Package A was solid and 5 years older than Package B 7.6, there was a good chance someone above you would declare 7.6 > 0.7.6 and that it was paid-for software so "We can get support from them" and force you to work with a shittier product.
So "we" developers did what we almost always do: we cast about for a better solution and landed on one that made the meaning of version numbers opaque to anyone outside the guild of mages writing code.
In short, semantic versioning may suck hard, but it sucks less.
Honestly, I don't see what a non-semantic scheme brings to the table. In SemVer, you can (theoretically) infer compatibility from the version number. A date tells you nothing.
(Disclosure: I work at Google on public APIs, opinions are my own)
Google's proposed a "stability" semantic as a third option[0]. TL;DR no breaking changes in the Stable channel but you can add backwards-compatible[1] features in-place.
A permanent Beta channel that's a superset of Stable lets users choose how change-tolerant they are. This lets API producers launch features earlier, knowing they will only impact risk tolerant users if breaking changes are needed. Theoretically this reduces the need for breaking changes in Stable, which require a new Major version.
I want to love Shopify but Shopify doesn't want to love developers. For example, when their multi-location offering [0] was _in beta_, they also announced that the Inventory API was going to change in breaking ways in 2 months, and _none_ of their own SDKs supported locations at that point. This has happened with Shopify time and time again. Shopify doesn't manage API versioning or breaking changes: it just forces developers to update or live with their broken applications and interfaces.
Sure, it's Shopify's choice. But considering how long their own changes take (for example, multi-language is still in some sort of beta and it's been up and coming for like 5 years?), the API cycles are just brutal. And the saddest thing is that Shopify is still the best managed e-commerce platform for most use cases.
How do they handle security fixes in old releases? If releasing a security patch requires backporting it to 5 different active releases then I'm unconvinced that this is a useful strategy.
Great question! As mentioned in the article, when making a change you would typically add an `ApiChange.in_effect?` check to see if your new functionality should execute. When we implement security fixes, we do not include this check and the fix retroactively applies to all API versions.
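A rough sketch of how that gate might look, assuming a hypothetical `ApiChange` class modeled on the article's description (the real Shopify internals will differ):

```ruby
# Hypothetical version-gated change, following the ApiChange.in_effect? pattern
# described in the article. All names below are illustrative.
class ApiChange
  def initialize(effective_from:)
    @effective_from = effective_from # e.g. "2020-01"
  end

  # A change is in effect only for API versions at or after its cutoff.
  def in_effect?(api_version)
    api_version >= @effective_from # "YYYY-MM" strings sort chronologically
  end
end

RENAME_FIELD = ApiChange.new(effective_from: "2020-01")

def serialize_order(order, api_version)
  if RENAME_FIELD.in_effect?(api_version)
    { total_price: order[:price] } # new behaviour, gated by API version
  else
    { price: order[:price] }       # old behaviour preserved for older versions
  end
end

# A security fix skips the gate entirely, so it applies to every API version:
def sanitize_note(note)
  note.gsub(/<script.*?>.*?<\/script>/mi, "")
end
```

The key design point is that the version check is opt-in per change, so leaving it out (as with `sanitize_note`) is what makes a fix retroactive.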
At that point, I would use something like Gatsby which would build static pages for each of your product pages. You could host that on S3 with a CDN for cheap. You could downgrade the Shopify plan to $9/month since you're not going to be using their storefront.
Thank you, but this doesn't solve the speed issues in my case. I could use Sylius or any other e-commerce platform that I could host myself and apply custom speedups in code instead of just using a CDN. :) The shop I mean is in the UK and has only UK customers, so there's no need for a CDN since customers aren't spread globally or even across the continent. As for static: wouldn't that work for every solution? In that case there's no need for Shopify at all... But as you say, I could even keep the generated static code and serve it from Redis instead of SSD/HDD, which would be even faster.
The main power of Shopify is the dashboard and the APIs. By having a custom storefront, Shopify just provides the APIs and a nice dashboard for non-programmers to use. You can also ditch Shopify later on since the storefront just builds off their API. Just my 2 cents. For $9/month, hosting something like Magento or even Woocommerce would be difficult.