Considerations When Planning Endpoints for your RESTful API (apievangelist.com)
47 points by apievangelist on Oct 19, 2011 | hide | past | favorite | 21 comments



I find it odd that, on the one hand, this site is about following standards and best practices, yet at the same time it violates some other fundamental web standards.

For instance, instead of using a well-established example domain name like example.com [1], they misuse the existing domain yourdomain.com, which currently belongs to Neon Network LLC.

[1] see RFC 2606, http://tools.ietf.org/html/rfc2606


A true REST purist would say that you shouldn't put versioning in the URL. Use different Content-Types for different versions of the API, and use Accept headers to indicate the version the client is using.
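As a sketch of what this might look like on the server side (the vendor media type `application/vnd.example.vN+json` is a hypothetical naming convention, not something from the article):

```python
import re

def api_version_from_accept(accept_header, default=1):
    """Parse the API version out of a vendor media type such as
    'application/vnd.example.v2+json' (hypothetical naming scheme).
    Falls back to the default version for plain media types."""
    match = re.search(r"vnd\.example\.v(\d+)\+json", accept_header)
    return int(match.group(1)) if match else default

print(api_version_from_accept("application/vnd.example.v2+json"))  # prints 2
print(api_version_from_accept("application/json"))                 # prints 1
```

The server would then dispatch to the matching representation, and the URL space stays stable across versions.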


The one thing I would tell people implementing an HTTP API: create a non-trivial sample application using the API.

The benefits of this are many:

1) You will create a client library for your API in at least one language. You can release this along with the API which will make it easier for people to adopt the API. It will also serve as an example of best practices for interfacing with your API for other client library authors.

2) You will be forced to really think about what representations people using the API will want to consume. Too many people expose an API that is just a wrapper around their data models. Your data models are rarely structured in an appropriate way: e.g. a blog_post might have a user_id, but most API users would appreciate being passed some form of a user representation there, rather than having to make another API call.

3) The application serves as an end-to-end test of your API (though obviously isn't sufficient in terms of testing)
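To illustrate point 2, here is a small sketch (field names are hypothetical, not from the comment) of the difference between dumping the data model and returning a representation the consumer actually wants:

```python
import json

# Raw data-model dump: the client must make a second call to
# /users/42 just to display the author's name.
raw = {"id": 7, "title": "Hello", "user_id": 42}

# Friendlier representation: embed a compact user object directly,
# saving the consumer a round trip.
expanded = {
    "id": 7,
    "title": "Hello",
    "user": {"id": 42, "name": "Alice", "url": "/users/42"},
}
print(json.dumps(expanded, indent=2))
```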


I would go one further: build your API first, and build your application on top of it.


Agree completely with Adamj here - put versioning in the headers. And to be honest, the advice here implies that, as a newbie product manager, you can design a REST API without a hardcore (presumably experienced) REST developer and throw it over the wall to the developers (who will thank you). I cannot ever imagine that working.

Some recent posts on why no one gets REST (to be looked up) are much more useful, but even so I have not yet found a good guide to REST style - even the O'Reilly book was disappointing.


I think REST as a guideline rather than a strictly adhered-to ideology makes a lot more sense. I'm really not keen on APIs that contrive natural actions into encapsulated resources. While the two may be equivalent, it feels like changing the problem domain to match the technology, with no clear gain.

Similar deal with the whole versioning thing. I'm not suggesting we can't improve, but putting versioning in the header seems to solve a problem I've never had while complicating a lot of other things. Checking with curl becomes more complicated, checking which version of the API you're targeting becomes more complicated, and looking up documentation (which is oddly lacking for a lot of REST APIs) is a hell of a lot more complicated.

I'd rather design APIs the way I'd actually use them in practice, not the way that makes them theoretically more "correct." And I'll take a locked-in version that ships today over waiting for the ideal API that's still in the works.


I hypothesize that part of the reason full REST (with HATEOAS) isn't popular (maybe "popular" isn't the right word here; maybe "pervasive"?) is the extra work needed to move from a Type I to a REST interface [1]. That up-front cost is not insignificant. Longer term, being more RESTful might indeed ease refactoring and versioning, but by then you may have more help working on the API (after growth). Besides, that payoff is down the road, and such things fall by the wayside.

When I try to think of some of the nicest APIs I have actually used, I don't recall them being 100% REST compliant.

I am still not sold on putting versioning in the headers. URL-based versioning has the benefit that the version in use is readily apparent (visible), and it eases scaling because front-end proxies can route on URL partials. The pragmatist in me says URL versions are "ok". Maybe not the best, but "good enough" as a trade-off if it makes them easier to implement.

[1]: http://nordsc.com/ext/classification_of_http_based_apis.html


Versioning information in the headers should be just as obvious as versioning in the URL when you are writing code against the REST API.

URL-based versioning is only more visible if you are accessing the API through a browser, in which case it's probably fine for the API to return the latest version, since humans are pretty good at making sense of new representations.

I'm not sure I understand the benefit of routing on URL partials. A REST API should be easy to load-balance with a simple round-robin setup, since no state lives longer than a single request.

Edit: OK, yeah - routing on URL partials will allow you to implement v2 as an entirely new system.
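A minimal sketch of that kind of dispatch (backend names and ports are made up for illustration): a front-end proxy inspects the path prefix and sends each API version to a completely separate system.

```python
def pick_backend(path):
    """Hypothetical front-end routing table: each API version lives on
    its own backend, so v2 can be a brand-new system while v1 keeps
    running unchanged."""
    backends = {
        "/v2/": "http://java-api.internal:8080",
        "/v1/": "http://legacy-php.internal:8080",
    }
    for prefix, backend in backends.items():
        if path.startswith(prefix):
            return backend
    # Unversioned requests fall back to the oldest stable version.
    return backends["/v1/"]

print(pick_backend("/v2/posts/7"))  # prints http://java-api.internal:8080
```

In practice this is one `location` block per version in nginx or similar; the point is just that path-based versions give the proxy something to switch on.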


Your reasoning for putting the API on a subdomain applies to versioning as well. Imagine v1 of the API is written quickly in PHP. As the needs of the company change, you may need to iterate on the API and build v2 in Java. Managing versions as api.example.com/v1 and api.example.com/v2 would be difficult. Here are two suggestions: v1.api.example.com or v1api.example.com.


Routing on the path is trivial for many different proxies.


Yes, but you still need a proxy. With versioning in the subdomain you can easily route requests for different versions to completely different systems. That flexibility might come in handy one day.


How is this less effort than the proxy? In one case you add a line to your web server config; in the other, a line to your DNS config.


Why not put the version in the domain name? v1.api.example.com looks legit, and also implies that the api versions are indeed handled by different subsystems.


I'm not sure why you would recommend SSL from day one. If the API needs to sustain high throughput and low response times, adding SSL means adding overhead.


Avoiding SSL on the grounds of overhead is premature optimization. Unless profiling reveals that SSL introduces significant delays (and assuming the cost of a proper SSL certificate is affordable), there's no reason to go without it.


If you want caching shared across multiple clients, I don't think you can use SSL, so it depends a lot on the purpose of the API.


It all depends on the use case; that is why I don't think an API should be secured with SSL from day one. Here is an example of how SSL can be unnecessary: if I am processing billions of ad requests daily, with response times of a few milliseconds, and operating in a secured environment, why would I add a layer of SSL to every request?

Use cases drive requirements, not buzzwords; that's my whole point.


Sweet. Great discussion, everyone - just what I was wanting. I will gather everyone's recommendations and update the post. Good shit.


I don't know if there is a better way to do this, but when creating our own API we also added a $_GET['_method'] parameter accepting 'GET|POST|PUT|DELETE' values, basically to override the HTTP request method. This was very convenient for calling the REST API from Flex/Flash, which does not support PUT/DELETE request types yet.
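The override logic itself is tiny; here's a sketch of the same trick in framework-neutral terms (the function and parameter names are hypothetical, and only PUT/DELETE are tunneled, matching the Flash limitation described above):

```python
def effective_method(request_method, query_params):
    """Sketch of the _method override workaround: let a client that can
    only issue GET/POST (e.g. old Flash) tunnel PUT or DELETE through a
    ?_method= query parameter. Anything else keeps the real verb."""
    override = query_params.get("_method", "").upper()
    if override in ("PUT", "DELETE"):
        return override
    return request_method

print(effective_method("POST", {"_method": "delete"}))  # prints DELETE
print(effective_method("GET", {}))                      # prints GET
```

A common variant of this workaround uses an `X-HTTP-Method-Override` header instead of a query parameter; either way the server should only honor the override on POST, so a cached GET can never be turned into a destructive call.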



OK, thanks! I'm still curious why this hack is better than the one I stated above. As far as I can see, they both use GET instead of PUT/DELETE and both add an extra parameter (one uses a header, the other a request parameter). Why would this be any better?



