
The problem with this approach is that it requires the system doing randomization to be aware of the rewards. That doesn't make a lot of sense architecturally – the rewards you care about often relate to how the user engages with your product, and you would generally expect those to be collected via some offline analytics system that is disjoint from your online serving system.

Additionally, doing randomization on a per-request basis heavily limits the kinds of user behaviors you can observe. Often you want to consistently assign the same user to the same condition to observe long-term changes in user behavior.

This approach is pretty clever on paper, but it's a poor fit both for how experimentation works in practice and from a system design POV.



I don't know, all of these are pretty surmountable. We've done dynamic pricing with contextual multi-armed bandits, in which each context gets a single decision per time block and gross profit is summed up at the end of each block and used to reward the agent.
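
Roughly, the shape of it is something like this (a minimal epsilon-greedy sketch, not the real system; the price points, contexts, and epsilon are made up):

    import random
    from collections import defaultdict

    PRICE_ARMS = [9.99, 12.99, 14.99]   # hypothetical price points
    EPSILON = 0.1                       # exploration rate

    counts = defaultdict(int)           # pulls per (context, arm)
    values = defaultdict(float)         # running mean gross profit per (context, arm)

    def choose_price(context):
        """Pick one price for this context, held fixed for the current time block."""
        if random.random() < EPSILON:
            return random.choice(PRICE_ARMS)                        # explore
        return max(PRICE_ARMS, key=lambda p: values[(context, p)])  # exploit

    def record_block_reward(context, price, gross_profit):
        """At the end of the block, feed back the summed gross profit as the reward."""
        key = (context, price)
        counts[key] += 1
        values[key] += (gross_profit - values[key]) / counts[key]   # incremental mean

The only thing the agent needs handed back is that per-block aggregate, which keeps the coupling to the analytics side fairly loose.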

That being said, I agree that MABs are poor for experimentation (they produce biased estimates that depend on somewhat hard-to-quantify properties of your policy). But they're not for experimentation! They're for optimizing a target metric.


Surmountable, yes, but in practice it is often just too much hassle. If you are doing tons of these tests you can probably afford to invest in the infrastructure for this, but otherwise A/B testing is just so much easier to deploy that it does not really matter to you that you will have a slightly ineffective algo out there for a few days. The interpretation of the results is also easier, as you don't have to worry about the time sensitivity of the collected data.


You do know Amazon got sued and lost for showing different prices to different users? That kind of price discrimination is illegal in the US; it's related to actual discrimination.

I think Uber gets away with it because it's time- and location-based, not person-based. Of course, if someone starts pointing out that segregation by neighborhoods is still a thing, they might lose their shiny toys.


Indeed, we are well aware.


You can do that, but now you have a runtime dependency on your analytics system, right? This can be reasonable for a one-off experimentation system but it's not likely you'll be able to do all of your experimentation this way.


No, you definitely have to pick your battles. Something that you want to continuously optimize over time makes a lot more sense than something where it's reasonable to test and then commit to a path forever.


Hey, I'd love to hear more about dynamic pricing with contextual multi-armed bandits. If you're willing to share your experience, you can find my email on my profile.


You can assign multi-armed bandit trial arms on a lazy, per-user basis.

So the first time a user touches feature A, they are assigned to some trial arm T_A, and all subsequent interactions keep them in that arm until the trial finishes.
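
Something like this sketch, where the assignment store and the arm list are stand-ins for whatever you actually use:

    import random

    ARMS = ["control", "variant_a", "variant_b"]   # hypothetical trial arms
    assignments = {}                               # stand-in for a persistent store

    def get_arm(user_id, experiment_id):
        """Assign an arm on the user's first touch of the feature,
        then keep returning that same arm until the trial finishes."""
        key = (user_id, experiment_id)
        if key not in assignments:
            # first touch: sample an arm (uniformly here; a bandit would
            # sample from its current policy instead) and persist it
            assignments[key] = random.choice(ARMS)
        return assignments[key]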


The systems I've used pre-allocate users to an arm, effectively at random, by hashing their user ID or equivalent.


To make sure user ID U doesn't always end up in, e.g., the control group, it's useful to concatenate the ID with the experiment UUID before hashing.
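
E.g. something like this, hashing "experiment:user" with a stable hash (the SHA-256 choice and 1% threshold are just for illustration):

    import hashlib

    def bucket(user_id: str, experiment_id: str, buckets: int = 100) -> int:
        """Deterministically map (experiment, user) to a bucket in [0, buckets)."""
        digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).digest()
        # 64-bit slice then mod; the modulo bias at this size is negligible
        return int.from_bytes(digest[:8], "big") % buckets

    def in_test_group(user_id: str, experiment_id: str, percent: int = 1) -> bool:
        return bucket(user_id, experiment_id) < percent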


How do you handle different users having different numbers of trials when calculating the "click through rate" described in the article?


Careful when doing that though! I've seen some big eyes when people assumed IDs to be uniformly randomly distributed and suddenly their "test group" was 15% instead of the intended 1%. Better to generate a truly random value using your language's favorite crypto functions and be able to work with it without fear of busting production.
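
E.g. in Python, roll the die once per user with the crypto RNG and persist the result, rather than trusting the distribution of your IDs (sketch only):

    import secrets

    def assign_test_group(percent: int = 1) -> bool:
        """Call once per user at assignment time and store the result."""
        return secrets.randbelow(100) < percent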


The user ID is non-uniform after hash and mod? How?


If you mod by anything other than a power of two, it won't be. https://lemire.me/blog/2019/06/06/nearly-divisionless-random...


That article is mostly about speed. The following seems like the one thing that might be relevant:

> Naively, you could take the random integer and compute the remainder of the division by the size of the interval. It works because the remainder of the division by D is always smaller than D. Yet it introduces a statistical bias

That's all it says. Is the point here just that 2^31 % 17 is not zero, so 1, 2, 3 are potentially happening slightly more often than 15, 16? If so, this is not terribly important
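
For scale (my arithmetic, not the article's): with 2^31 hash values and 17 buckets, the low residues each show up exactly once more than the rest.

    n, d = 2**31, 17
    base, extra = divmod(n, d)   # base = 126_322_567, extra = 9
    # residues 0..8 each occur base + 1 times, residues 9..16 occur base times,
    # a relative difference of roughly 1 / base, i.e. about 8e-9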


> If so, this is not terribly important

It is not uniformly random, which is the whole point.

> That article is mostly about speed

The article is about how to actually achieve uniform randomness at high speed. Just doing mod is faster, but it does not satisfy the uniform randomness requirement.
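
The cheap-but-slower fix (plain rejection sampling, not Lemire's nearly-divisionless trick from the link) looks roughly like this:

    import secrets

    def uniform_below(d: int, bits: int = 32) -> int:
        """Uniform integer in [0, d): reject the biased tail of the 2**bits
        range, then the remainder is exactly uniform."""
        limit = (1 << bits) - ((1 << bits) % d)   # largest multiple of d <= 2**bits
        while True:
            x = secrets.randbits(bits)
            if x < limit:
                return x % d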


If your number of A/B testing cohorts is fewer than 100, then yeah, this passes for being uniform.


It doesn't, mathematically. It might be good enough for some cases, but it is not good enough for cases that actually require uniformity.


In addition to the other excellent comments: IDs will become non-uniform once you start deleting records. That will break any hope you might have had of modulo and percentages being reliable partitions, because the "holes" in your ID space could be maximally bad for whatever use case you thought up.


Just make sure you do the hash right so you don’t end up with cursed user IDs like EverQuest.



