Hacker News

I think one of the reasons people assume that trying something out leads to permanence is that there's a long-established trend of exactly that on websites HN shares membership with. You may be paying the social price for actions inflicted by Digg and Reddit years ago.



That's quite interesting. What would you suggest by way of differentiating from this, if we decide to do something like this again? I don't necessarily mean another no-politics week; just some short-term variation with no intention of permanence. I'd hate to give up our ability to try out ideas.


I don't think there's a shortcut to differentiating yourself there: you just have to run experiments and not use that language to roll out new features (i.e., never make an experiment permanent at the end; roll it back for a few days, weeks, or months, discuss it, and then redeploy). I would actually trust a website less if it made too big a deal about how it wasn't like those other guys, because that's an old marketing trick.

Same way you build a reputation for anything.


Maybe add a notice to the site header listing currently-running experiments. Apart from making it easier for people who don't read every post to keep up with rule changes, that'd emphasize that there's something out of the ordinary going on, and not the new normal.
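A header notice like this could be a one-line banner generated from whatever experiments are currently active. A minimal sketch (the `experiment_banner` function and the field names are hypothetical, not anything HN actually has), which also folds in the end-date idea raised elsewhere in the thread:

```python
# Hypothetical sketch: render a one-line header banner listing
# currently-running experiments, each with an explicit end date so
# readers can see it is temporary. Returns '' when nothing is running.
def experiment_banner(experiments: list[dict]) -> str:
    if not experiments:
        return ""
    parts = [f"{e['name']} (ends {e['ends']})" for e in experiments]
    return "Experiments in progress: " + "; ".join(parts)

print(experiment_banner([{"name": "Political Detox Week", "ends": "2016-12-12"}]))
# Experiments in progress: Political Detox Week (ends 2016-12-12)
```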

PS: How do you feel about adding a "rationale" text box to the flagging process, plus some eventual feedback on whether moderation agrees with the flag/rationale? I basically never flag comments because I'm not sure I'm on the same page with y'all.


> What would you suggest by way of differentiating from this .... I just mean some short-term variation with no intention of permanence

Add a date and time, down to the second. The more exact the end time the more people will believe it really is temporary.


I wanted to see the criteria the experiment was going to be evaluated on. I feared that since the metrics for the pro side would be more readily available (like measuring how many fewer flamewars there are, or how quickly they get flagged instead of sprawling through the comments) than for the "it isn't a good change" side (fewer meaningful discussions, or a subtle bias towards one side of an issue), the experiment would provide all the evidence needed to extend the ban. I know I am also really sensitive and jumpy around things I perceive as censorship, even for a week.


How about politics Fridays, or no-politics N-days? The rule would be no politics in general, unless it's something like "politician bans encryption" type stuff.

That way, if people want to do politics, fine, but restricting it to a specific day or days lets those of us who are sick of politics just focus on getting things done, and then post about what we did the day after.

Maybe alternate in a series: politics day on, no-politics day off.


I think we learned enough here to know that there's no way that will work, and also that it wouldn't make the site better.

Edit: I'm afraid that sounded dismissive—sorry! I was writing in haste and genuinely appreciate your suggestion.


I think there will always be resistance to change, so perhaps there should be a visible indication that some idea is being tested out, with an option for users to disable it.


That works for software changes but not community standards, which is what we were experimenting with here.

Perhaps the lesson is simply: don't experiment with community standards.


I mistakenly thought of the change as a software change, since I've yet to use the flag button.

It might be useful to tie changes in community standards to something concrete and trackable. Taking the detox week as an example, there would be a way to flag something and specify "detox week" as a reason.

That being said, can you give an example of a change in community standards on this site whose effects cannot be tracked in software?

EDIT: Clarified scope of "changes in community standards" to this site.


> Can you give an example of a change in community standards on this site whose effects cannot be tracked in software?

Story quality, thread quality, community satisfaction...


I'd hope that at least some experimentation is still possible. Wouldn't a lack thereof lead to stagnation? I have a hard time believing that, as great as it is, HN has found the global maximum of community design.


Experiment, please, but wisely and with clear communications.

The 'Net needs more models tested, but also far more nuance than most sites seem to show.



