Absolutely agree on the performance front. If you're starting out, SQL Server will generally run well out of the box, and if you upgrade hardware and reinstall, it'll generally keep working well without you having to do anything else. At some point you might have to start thinking about I/O and how best to arrange files, but maybe not for a while, depending on what you're doing. (Just to be clear, let me restate: I'm talking starting out here - this is obviously NOT the approach you'd take when building or upgrading a large, complex application - I'm talking basic CRUDdy type apps here.)
Contrast this with some of the NoSQL platforms where immediately you're forced to think about topics such as sharding and clustering (I'm looking at you Couchbase) and the pain-in-the-ass factor is much higher.
Key point: you don't need much of a clue about SQL Server administration to start using it, but you perhaps do with Couchbase, Elasticsearch, etc. From an ease of use point of view with NoSQL, I'd probably point to MongoDB as having the most approachable learning curve.
There's also no BS about having to keep everything in memory to keep performance acceptable (I'm looking at you, again, Couchbase - that's two strikes). The whole point of SQL Server and other RDBMSs is that they continue to work well when the amount of data you need to store vastly outweighs the amount of physical memory you have. Of course, if you can get everything into physical memory it will perform better but, even if you can't, it will still perform well.
Of course, if you're using SQL Server you do need to learn SQL (true for any RDBMS), which can be off-putting. It's absolutely not a beautiful language - in fact it's bloody aggravating when you write a lot of it - but it is extremely powerful and excellent for working with sets of data arranged in rows (i.e., tables). Nothing else really comes close.
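To give a flavour of what I mean by "working with sets" (table and column names here are invented for illustration), one declarative statement replaces what would otherwise be a loop in application code:

```sql
-- Hypothetical schema: Orders(CustomerId, OrderDate, Total).
-- One set-based statement: spend per customer over the last year,
-- restricted to customers who spent more than 1000 in that period.
SELECT CustomerId,
       SUM(Total) AS SpendLastYear
FROM   Orders
WHERE  OrderDate >= DATEADD(YEAR, -1, GETDATE())
GROUP BY CustomerId
HAVING SUM(Total) > 1000
ORDER BY SpendLastYear DESC;
```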
Here are a few pain points from a dev who doesn't know any other database:
1) Lack of deferred constraints. To me it's natural to think that database constraints shouldn't apply while you're shuffling things around during a transaction. Other databases apparently support this.
2) Lack of multiple cascade paths. Why should I have to implement this functionality in triggers? (There's a minimal repro of the restriction right after this list.)
3) Why can't you create a column in the middle of a table? I know it has something to do with the column order being tied to the actual physical storage order, but why are they connected in the first place? Just show me the columns in the order I specify, I don't give a damn what order they're actually stored in (or should I?)...
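To illustrate (2), here's a minimal repro; table names are invented for the example. The classic trigger for the error is a "diamond": deleting a row in one table could reach another table via two different cascading routes, and SQL Server refuses to create the second foreign key:

```sql
CREATE TABLE Companies (CompanyId INT PRIMARY KEY);

CREATE TABLE Teams (
    TeamId    INT PRIMARY KEY,
    CompanyId INT REFERENCES Companies (CompanyId) ON DELETE CASCADE
);

CREATE TABLE Projects (
    ProjectId INT PRIMARY KEY,
    CompanyId INT REFERENCES Companies (CompanyId) ON DELETE CASCADE
);

-- Deleting a Companies row could reach Assignments via Teams AND via
-- Projects, so the second cascading FK fails with Msg 1785 ("may cause
-- cycles or multiple cascade paths").
CREATE TABLE Assignments (
    AssignmentId INT PRIMARY KEY,
    TeamId    INT REFERENCES Teams    (TeamId)    ON DELETE CASCADE,
    ProjectId INT REFERENCES Projects (ProjectId) ON DELETE CASCADE
);
```

The usual workaround is to declare one path as ON DELETE NO ACTION and do that cascade by hand in a trigger, which is exactly the annoyance being complained about.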
AD 3) The typical answer is "why would you ever want to insert a column in the middle of a table?" The logical order of columns is not connected to the physical storage order. SQL Server will physically arrange columns within the row for efficient storage (roughly: fixed-length columns first, variable-length columns at the end). Microsoft could make it so that a new column can be inserted anywhere in the table definition; there just isn't any compelling reason to do it. You have to specify column order in your SELECT statements anyway (SELECT * is bad practice for anything serious).
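And if the presentation order genuinely matters (legacy reports, picky tooling), a view gives you whatever logical order you like. A throwaway sketch, with made-up table and column names:

```sql
-- Hypothetical: People was created without a MiddleName column.
ALTER TABLE dbo.People ADD MiddleName NVARCHAR(50) NULL;  -- lands "at the end"

-- The view presents the columns in whatever order you choose,
-- regardless of where the engine physically put the new one.
CREATE VIEW dbo.PeopleView
AS
SELECT PersonId, FirstName, MiddleName, LastName
FROM   dbo.People;
```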
EF will deal with much of this for you because it uses "empty" rows so FK references can always be satisfied during a transaction, but this obviously slows things down.
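For comparison on (1), here's what deferred constraints look like where they are supported - this is PostgreSQL syntax, since SQL Server has no equivalent as far as I know:

```sql
-- PostgreSQL: the FK is only checked at COMMIT, so you can shuffle
-- rows around mid-transaction without tripping it.
CREATE TABLE parent (id INT PRIMARY KEY);
CREATE TABLE child (
    id        INT PRIMARY KEY,
    parent_id INT REFERENCES parent (id) DEFERRABLE INITIALLY DEFERRED
);

BEGIN;
INSERT INTO child  VALUES (1, 42);  -- parent 42 doesn't exist yet: fine
INSERT INTO parent VALUES (42);     -- now it does
COMMIT;                             -- constraint checked here; passes
```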
With (2) I'd question what you're doing. I'm not saying absolutely don't use them, or that what you're doing is "wrong", but TRIGGERs often result in side-effects that cause problems further down the line. They can be OK for auditing (but consider other options such as event sourcing if you have this kind of requirement), but the moment you start putting business logic in them, or hanging it off data modified by a trigger, you've started down a potentially dark path. The reason for this is that suddenly DML operations have side effects that may not be anticipated by people working with the database in future. In fact, depending upon how permissions are set for developers, they may not even be aware that triggers exist, leading to unexpected results, "weird" performance problems, etc. I've run across this recently in a moderately sized client database: ~500 tables, ~2500 stored procedures, and a few dozen magic triggers that most people are unaware of, which manifested themselves when we collected execution plans for some query tuning ("Where's all this stuff coming from? ... Oh.").
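On that note, "magic" triggers are at least easy to enumerate if you have the rights to see them; a quick query against the standard catalog views (nothing exotic here):

```sql
-- List every DML trigger in the current database, with its parent table.
SELECT t.name                    AS trigger_name,
       OBJECT_NAME(t.parent_id)  AS table_name,
       t.is_disabled,
       t.is_instead_of_trigger
FROM   sys.triggers AS t
WHERE  t.parent_class = 1        -- table (DML) triggers, not DDL triggers
ORDER BY table_name, trigger_name;
```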
With (3) I'd just be interested in hearing your use case.