I have experience w/Chargebee and it's quite robust. You'll still need a payment gateway (Stripe/Braintree/PayPal) since they don't do CC processing themselves (CB does integrate with them), but if you need subscription management, coupons, etc., there's a lot that CB does.
There was nothing wrong with csproj, apart from the MSBuild verbosity baggage. JSON's a terrible format for anything like this due to its lack of comments and visual noise. (The mandatory quotes all over the place make it hard to read.)
But I can't imagine it's hard to convert between them. Most .csproj files are basically a list of references and a <Compile Include="*.cs" />.
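For anyone who hasn't opened one lately, a stripped-down old-style .csproj really is little more than that list. A rough sketch (the target framework and references here are just placeholders):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <OutputType>Library</OutputType>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
  </PropertyGroup>
  <ItemGroup>
    <!-- assembly references -->
    <Reference Include="System" />
    <Reference Include="System.Xml" />
  </ItemGroup>
  <ItemGroup>
    <!-- source files to compile -->
    <Compile Include="**\*.cs" />
  </ItemGroup>
  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
</Project>
```

Everything else a typical generated csproj carries (GUIDs, Debug/Release property groups, etc.) is the verbosity baggage mentioned above.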
While some people get a bit cargo-cult about things, there is a rough consensus that XML is good for some things and terrible for others, whilst JSON is the exact reverse ;-)
I personally rather like YAML for anything intended for humans to read and edit.
If XML didn't have those idiotic verbose closing tags, this probably wouldn't even be a discussion. Or if JSON had comments and didn't require quoting all keys.
What's funnier is seeing the HTTP/JSON folks re-invent SOAP. But maybe this time it'll actually be Simple.
+1. To my knowledge Discord is intended for gamers and will be for the foreseeable future. They don't have the infrastructure to support enterprise expectations like LDAP/SSO, dedicated account managers, support hotlines, guaranteed SLAs (although I know their tech team is highly competent), etc.
1pw user here and long time AD architect. If you have any questions around AD/LDAP, I'd be happy to answer what is 'common' when dealing with AD-integrated solutions.
I think this has potential, but there are so many usability issues that it's quite difficult to use effectively.
A few examples (only talking about laptops):
-Labels on the sliders are unclear. If I select 4GB of RAM, are the results showing only those laptops with exactly 4GB of RAM, or those with >= 4GB?
-No distinction between configuration options & laptop models. On first load, it appears that there are 100+ models to choose from, when in reality it's probably much less with several configuration options.
-Possibly bad assumptions that will affect data integrity. Amazon is not a reliable source for laptop specs or models because many products are fulfilled by 3rd parties that sloppily input specs and are inconsistent about where details are located. For example, the model number might be in the title, details, technical specifications, or Q&A.
-Odd defaults. Why do I start out looking for laptops with 1GB of RAM & 16GB HDDs?
I'll check back periodically and keep an eye on this. My first thought is to take the best pieces from newegg.com and emulate those as they got filtering right in many ways.
My $.02 is focus on getting the experience / specs reliable and THEN add pricing. You might be spreading yourself too thin trying to tackle pricing as well.
The stats that would be hit the most would obviously be bandwidth, followed by RAM.
Storage requirements would change of course, but bandwidth would be absorbed by whatever CDN they're using, so the traffic quantity stat would be different.
Also, with images/videos, they're now sending more URLs/embeds/etc. along with data, so more characters = more RAM for caching and/or serving up on web servers. In theory, shouldn't be much more of a hit.
It's probably worth noting a few things about StackExchange's uniqueness for those that may not know.
StackExchange runs on physical hardware and they have spent a considerable amount of time optimizing for bare metal performance. Their team is unique in that they embraced hardware from the start vs. many teams today that want hardware abstracted away. They have more maintenance overhead around hardware management, but don't experience lost IOPS due to IaaS (AWS/Azure) overhead.
Their current environment required overcoming tremendous technical hurdles on earlier versions of SQL Server (these might be general RDBMS limitations as well). Luckily they were able to get the Microsoft SQL team engaged to get them through this.
Finally, their team was world class. BrentOzar, SamSaffron, MarcGravell (and others) are highly respected members of the SQL & .NET community.
It's easy to look at their setup and say "Wow, that's not a lot." and overlook the circumstances & talent required to achieve such an efficient system. I'm not sure many teams would pursue this architecture if they knew the effort (and luck) involved.
> Their current environment required overcoming tremendous technical hurdles on earlier versions of SQL Server
Jeff Atwood's goal was always "performance is a feature," and that means rendering your pages faster than anybody else's. With that in mind, you're always going to have technical hurdles to overcome on any platform - because you want to get faster results than anybody else is getting.
I remember when I first got involved, and Jeff told me something along the lines of, "This slow query runs in ~500ms, and we want it to run in ~50." That statement alone is a huge leap over what a lot of RDBMS users say - usually when people refer to a slow query, their unit of measurement is whole seconds, not milliseconds. They were serious about performance from the get-go.
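For anyone curious how that kind of number gets measured, SQL Server will report CPU and elapsed time per statement in milliseconds when you turn the session statistics on. A minimal sketch (the table and predicate here are made up for illustration):

```sql
-- report timing and I/O in the Messages tab for each statement that follows
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- hypothetical query being tuned
SELECT TOP 50 p.Id, p.Title
FROM dbo.Posts AS p
WHERE p.Score > 100
ORDER BY p.CreationDate DESC;

-- Messages output includes a line of the form:
--   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms
```

When you're tuning at the "500ms down to 50ms" level, that millisecond-granularity readout (plus the logical-reads count from STATISTICS IO) is what you watch between iterations.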
> Luckily they were able to get the Microsoft SQL team engaged to get them through this.
Nothing against Microsoft - I love 'em dearly and make a great living with their tools - but Stack's success is much more due to their own internal team's dedication to performance. When we open bug requests with Microsoft, the time to resolution is typically measured in weeks or months. During that time, Stack Exchange's team has to come up with creative, impressive workarounds. They're the sharpest SQL Server users I know.
> Finally, their team was world class. BrentOzar, SamSaffron, MarcGravell (and others) are highly respected members of the SQL & .NET community.
Awww, shucks, but I'm not brown-nosing Stack when I say that their current team is ridiculously good. Their Site Reliability Engineers know more about SQL Server than most career DBAs I know.
> I'm not sure many teams would pursue this architecture if they knew the effort (and luck) involved.
It sounds like you're implying that other architectures are faster by default and with less effort, and I would disagree there. I haven't seen any big sites (big being top-100-in-the-world[1] type stuff) where the persistence layer was set-it-and-forget-it. Scale - especially millisecond performance at scale - is seriously hard work on all platforms.
Thanks for the reply Brent, a few points I should clarify.
> Luckily they were able to get the Microsoft SQL team engaged to get them through this.
Looking at this statement now, I see that it might have appeared that Microsoft 'came to the Stack team's rescue.' My impression was that over time Microsoft alleviated some of the issues the SO team had been working around.
> I'm not sure many teams would pursue this architecture if they knew the effort (and luck) involved.
I was simply stating that the SO architectural graphic looks deceptively simple and I can only imagine the amount of drama on hardware alone the team went through to achieve their goals. I do believe there are other OSS-based architectures (with likely more layers) that would require less 'workaround' effort to deliver reasonable performance/reliability, but compounded with SO's likely SLA & perf requirements on a closed-source RDBMS? It wouldn't surprise me to see some go 'good enough' and move on. I doubt anything is Ron Popeil easy when you're trying to shed milliseconds on a persistence layer.
We've only gotten 2 that I can remember in 4 years here. One was a regression in .NET 4.0 with high-load hashtable inserts (previously fixed in 3.5 SP1) that they had to re-fix. The other was a SQL Server issue in the 2014 CTP1 we were running, with an availability groups edge case on high volume/high latency secondaries.
Unless we're testing beta software, the MS stuff generally works and works very well. We of course work with them on feature requests that will make life better - some happen, some don't. I'm of the opinion you should try and have this relationship with any vendor or open source project. We're trying to make what we use better, and not just for us.
This article was pretty interesting to read, but unfortunately drew one incorrect conclusion...
"I guess my point is, that if you want to become a programmer, you have to be comfortable with having to learn new things constantly for the rest of your life."
This conclusion implies the rate of learning experienced in those 6 months would stay the same forever, and I think that's not entirely realistic. While technology is rapidly changing, it's FAR more sustainable to maintain knowledge across a wide variety of areas than to continue at the pace and breadth the OP experienced.
Also, it depends on your field/focus. If you're talking about web programming, sure. If you're a C++ app programmer, chances are your world isn't changing too dramatically each year.
As a Chicagoan trying to build a social network, I can say that this article hits a little too close to home; our ecosystem is completely dysfunctional. Go to a start-up event, and you'll see mostly service providers and FTEs that aren't willing to get engaged in anything new unless there's an income attached to it. Talk to investors, and they're mostly looking for traditional income streams or cashflowing properties. Any tech talent is either still in college (not terribly useful), or happily employed at corporations and NOT looking to change their disposition. It's been an ongoing struggle.
I worked in Santa Clara for 1.5 years and I love the NorCal energy, and while there are plenty of pluses to Chicago (and the Midwest) in general, I must agree that a Twitter or FB would probably never have been successful here. For any Midwesterners reading this that disagree, I would ask if they've ever tried to rally Midwest resources around a project that isn't creating revenue within 90 days of launch.
Btw, if you're an HNer in Chicago, or a tech/design resource interested in chatting, let me know :)