I understand that curtailment is needed to incentivise private businesses to invest in wind when the output and demand can't be correlated, but if the government owned the wind farms then it wouldn't matter if we wasted some, right? We could just always be overproducing and wouldn't have to pay for it.
Depends on what you mean by overproducing. The energy put into an electrical grid must be balanced by demand or bad things will happen. I think the second answer in the StackExchange question below is a good description.
Assuming a competitive market, the outcome is essentially the same, right? If the government builds more than would be economic for a private company, it's paying the extra through the construction/maintenance/financing costs that it would otherwise have been paying to incentivise the extra turbines.
Nope, the difference can be found in the profits made by the company that does in fact own and run the wind farms. The government could capture that should it wish to build them itself. This has been a hot topic recently with regard to fossil fuel energy generators who have been making large profits (in the billions) at the expense of people's energy bills.
Except if the government owned it then there is no profit motive to begin with. At some point the number of intermediaries does start to matter (though I imagine that power suppliers are lower-margin than other businesses).
There are a lot of details about... I suppose organizational theory? ...which make the decentralization nicer. But the profits come from somewhere.
Maybe not: "The Transport Secretary announced on 19 October 2022 that the Transport Bill which would have set up GBR would not go ahead in the current parliamentary session."
The actual railways (that is, the tracks and the stations) are already government owned anyway (Network Rail).
Network Rail sells access to the network to train operating companies, which are private (though often owned by other countries' state railways).
The network was originally built by private companies until nationalisation in 1947 (the railway companies were bankrupt after WW2). It was private again for a while in the 90s, then went bankrupt and was renationalised in 2002. Seems to be quite the money pit!
Despite what Corbynites tell you, the problem has never been privatisation or the franchise system - certainly not the TOCs. Indeed, the system has managed to take Marylebone and the Chiltern main line from near closure under government control to massive investment and high quality, thanks to long-term franchises. Competition has lowered prices dramatically for those that care (in 1990 a 3-hour return from Manchester to London cost about 3 times the £45 it does today, and today you also have the option of a 2-hour return on a faster service, the revenue from which subsidises the rest of the network), and has driven usage to record levels, arresting the massive declines under BR.
I don't have a strong opinion personally about the franchise system as I don't use any UK rail. My gut tells me they're not adding any value and they might as well be nationalised, but someone whose opinion I trust (rail engineer and YouTuber Gareth Dennis [0]) has said that ditching them and nationalising it entirely wouldn't really fix what people think it would. However, it has to be said that TPE and AWC have stood out as particularly dismal services - AWC were found to be fucking around with their already disappointing stats on cancelled services, for example. Hence my comment about users of those services - I would completely understand if they wanted an overhaul, if not outright nationalisation.
[0] - interestingly his "RailNatter" this Wednesday was titled "How to fix Britain's broken railways". I haven't watched it yet, but it will certainly feature some good insight: https://www.youtube.com/watch?v=CmKhVjw1xDA
Yes both of those franchises are failing, and if they aren't meeting their KPIs then they should have the franchise stripped.
Some of that is infrastructure (the cancellation of platforms 15/16 at Piccadilly means the new Ordsall Chord is basically useless, but they tried to use it anyway). I don't know enough about TPE to fairly attribute it, but with AWC it's franchise operation -- especially staff availability. Some of it is also government interference.
Why the left think a Tory government would be any better at running services than the train operators is anyone's guess. When you dig down to it, they seem to want more tax subsidies for big businesses (the ones who pay the £400 first-class peak-time returns on Manchester-London) and high-income commuters (the ones with 50% discounts via season tickets, who cause peak problems in the same way peak demand is a problem on the electric grid, and who typically earn far more than the average UK person who commutes by bus or van/car).
Fortunately the franchise system means many lines have significant competition, and you can choose based on journey time, price, and reliability.
Where privatisation does have a weakness is the financing of rolling stock.
I’ve no idea where you get the idea that “the left” (an enormous and diverse bloc of people) primarily want subsidies for the rich. It seems similar to the (IMO bad faith) argument American conservatives made about student debt forgiveness - that because a small number of wealthy people would benefit from a universal thing, it is therefore wrong.
The popular opinions I have seen are:
- “nationalise the railways”
- more frequent, reliable and cheaper services overall
As discussed, nationalising the railways isn’t necessarily the silver bullet many people think, but if you engage with those people and don’t insult/berate them they’ll come round easily. They're not hardline communists hellbent on the destruction of private companies - they just want better train services somehow and may not fully understand how to get there. That's not to deny the existence of "tankies" and other weirdos, but they're a very, very tiny minority.
HS2 should enable the “more frequent” part over the regions it covers. I don’t know how to make services cheaper or more reliable, I imagine subsidies come into it somewhere though, and this inevitably means that yes someone wealthy at some point will benefit from a cheaper rail ticket.
> if you engage with those people and don’t insult/berate them they’ll come round easily
Nope, 10 years of plain simple facts and it doesn't help. It's still Richard Branson that's stealing everyone's money; if only the west coast main line weren't run by him then it would be £20 return from Manchester to London. HS2, of course, will apparently cost £600 return for every journey and nobody will be able to afford it, or something.
HS2 should be cheaper than current trains, if there is the demand.
Currently, running 1000 seats London to Manchester return takes two 11-car trains, each with 3 members of staff (driver, manager, shop), on a 5-hour round trip. That's 10 hours of train and 30 hours of staff per return.
To do that under HS2 will take a single train on a 2h30 round trip, with no shop needed - so roughly 2.5 hours of train and 5 hours of staff per return, which should mean far cheaper operational costs.
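Putting rough numbers on that (a back-of-envelope sketch in Python; the staffing and timing figures are just the assumptions above):

    # Resource hours per 1000-seat Manchester-London return trip.
    # Today: two 11-car trains, 3 staff each (driver, manager, shop), 5h round trip.
    train_hours_now = 2 * 5        # 10 train-hours
    staff_hours_now = 2 * 3 * 5    # 30 staff-hours

    # HS2 (assumed): one train, 2 staff (no shop), 2.5h round trip.
    train_hours_hs2 = 1 * 2.5      # 2.5 train-hours
    staff_hours_hs2 = 1 * 2 * 2.5  # 5 staff-hours

    print(train_hours_now, staff_hours_now)  # 10 30
    print(train_hours_hs2, staff_hours_hs2)  # 2.5 5.0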
Track costs should also be far cheaper than maintaining 150-year-old structures.
Whether those lower train and staff costs translate to lower fares, lower subsidies, or more subsidies elsewhere on the network, is a political decision.
I fear the government is killing demand though - for 2 of my last 3 trips to London down the west coast I've hired a car and driven, and it wasn't terrible.
Why build and maintain the entirety of the infrastructure for a national transport system: payment, timetables, rails, signalling etc. and then hand the very last bit - the only bit that actually generates revenue - to a private company?
It's just another example of the hubris of the Conservative Party. We've seen it play out repeatedly over the last decade, and even earlier in Thatcher's neoliberalism. Labour's lurch to the right resulted in displays of similar small-minded arrogance. Their undermining of the NHS through piecemeal privatisation is nothing short of a crime.
He didn’t achieve 2 MH/s on his MacBook. That was the author's estimate of what you could achieve on a multi-GPU setup; it was only around 1 KH/s on the MacBook.
They do have stuff inside them, though, so I'm not suggesting they make it heavier, but having the required electronics does take up space that could otherwise be utilised.
I don't know if it's just me, but hamburger menus that appear/disappear on hover seem very frustrating to use for me; it's especially obvious here, where the font size is so small.
I find it fiddly to delete or mark as completed, since the slightest missed movement means I have to start again.
I'd normally agree, but here the overall look and feel is meant to be that of a read-only list. Note-level menus aren't something that gets used too often, at least in my case, and having them visible at all times just makes the whole thing look like a field of hamburgers.
For general use it seems like it'll make no difference to my day-to-day usage of my computer, other than a warm fuzzy feeling that the underlying protocol is "right".
That's true - the two things I really like are no screen tearing ever (be that scrolling Firefox or YouTube videos) and that warm fuzzy feeling.
Edit: I also like Sway quite a bit (over i3/X) - its configuration (outputs, input devices, etc) makes a lot more sense and is a lot easier for me than trying to change stuff in different places and in different ways with X.
Highly recommend the book 'The Wisdom of Insecurity' by Alan Watts. This article touches on the search for an immutable self, but it doesn't quite make the leap that there isn't one and that the advice to "be yourself" is never going to bring any psychological satisfaction.
I'm not sure about you, but most people with JS "disabled" don't browse with JavaScript disabled entirely; instead they use a whitelisting/blacklisting plugin, otherwise they wouldn't be able to access many essential sites (e.g. banks). Under this setup, whitelisting Google isn't going to decrease security unless you think they're going to serve a 0day when you sign in.
Passwords can be hashed directly client-side with JavaScript, which is way more secure than sending them in the clear on the wire, so I don't disagree with Google's stance here and don't understand the hate.
Hashing passwords client side has no benefit if a site uses HTTPS.
If a site uses HTTP, then hashing the password client-side and sending it up to the server is equivalent to sending a cleartext password. If an attacker can already read your traffic, what is stopping them from using your password's hash to log in to your account?
It stops them from using the password to log in to your other accounts.
It stops a compromised server from silently leaking unhashed passwords.
It makes password hashing user auditable.
You could even do a challenge-response model to stop the hashed password from being usable to log in at all. Here is a primitive scheme for such a model (public key crypto probably enables more clever schemes, not sure):
- Upon signup, generate hashes of "$password$site$i" for i from 1 to 1000. Send these to the server and have the server hash them again.
- Upon login, after the user has entered their password into the box, the server sends an integer i from 1 to 1000 to the browser, and the browser sends back the hash of "$password$site$i".
Now a compromised hash can only let you log in 1 time in 1000. Combine that fact with the other available signals for "is this who we think it is" and you should be able to reject people who stole a hash reasonably reliably. Meanwhile, since you are still hashing the password on the server (again), you have lost literally nothing but a tiny bit of computation time.
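To make the shape of that concrete, here's a toy sketch in Python (as said above, a primitive scheme to illustrate the idea, not something to deploy; all function names are made up):

    import hashlib
    import secrets

    N = 1000  # number of precomputed challenge hashes

    def h(s: str) -> str:
        return hashlib.sha256(s.encode()).hexdigest()

    # Signup: client precomputes hashes of "$password$site$i"...
    def client_signup_hashes(password: str, site: str) -> list[str]:
        return [h(f"{password}{site}{i}") for i in range(1, N + 1)]

    # ...and the server stores each one hashed AGAIN, so a DB leak
    # never reveals what the client actually sends over the wire.
    def server_store(client_hashes: list[str]) -> list[str]:
        return [h(ch) for ch in client_hashes]

    # Login: server picks a random i, client answers with that one hash.
    def server_challenge() -> int:
        return secrets.randbelow(N) + 1

    def client_response(password: str, site: str, i: int) -> str:
        return h(f"{password}{site}{i}")

    def server_verify(stored: list[str], i: int, response: str) -> bool:
        return h(response) == stored[i - 1]

    # Demo round trip
    stored = server_store(client_signup_hashes("hunter2", "example.com"))
    i = server_challenge()
    assert server_verify(stored, i, client_response("hunter2", "example.com", i))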
Use a password manager and don't reuse passwords. If your randomly generated, unique password has good enough entropy then why go through all of the trouble of the rest of the client side hashing?
There's nothing stopping you from hashing your own passwords client-side and sending your bcrypt hash up to the server, except that some sites still truncate passwords to 32/16 chars etc.
When you actually need that level of security, client-side hashing will not be as good as the dedicated HSMs that many services now use for authentication.
Writing your own crypto flows can be extremely dangerous as you open yourself to all kinds of side channel attacks.
A password manager is a client-side method that only works for people who opt into it; Google needs to deploy a server-side method. Likewise with hashing my own passwords client side, and likewise with HSMs.
As for writing my own crypto. Indeed, if anyone actually used the scheme I suggested they would be making a mistake. I wrote it not to be used but to demonstrate that we can do better in an easy to understand way. Unlike me, Google has the resources to read the papers, do the math, carefully implement this, and do it properly.
Keywords for how to do it properly include "zero knowledge password proof" and "password authenticated key exchange".
PS. It's irrelevant to this conversation, but putting all my passwords into one program has always struck me as a monumentally stupid idea. I use one for passwords I don't care about, and I memorize unique passwords for the ones I do care about.
Worshipping an arbitrarily contrived measure of password entropy makes for good security theatre, but there's a lot more that goes into maintaining anything resembling actual security. How many people use "password generators" and trust that they'll come up with "random" words? And what about that old saying about putting all your eggs in one basket?
> It stops a compromised server from silently leaking unhashed passwords
If you trust the site to deploy correct JavaScript to do this, then that's the same level of trust as trusting that they implemented password salting and hashing server-side. You don't gain any robustness by moving this to JavaScript.
Your scheme is just a weak salting technique. You'd be better off just using a longer salt and a stronger hash function.
> - That is auditable - it is impossible for a malicious site to do so without risking being caught.
Hardly. Minification and obfuscation are trivial, and you can ensure the output is always different in order to defeat auditing. Not great for caching, obviously, but 'auditability' is not achievable if the server is determined to fool you.
> - The HTML/JS can be served from static cloud storage that is far less likely to be hacked than the server running a DB verifying passwords.
Passwords are simply not where you want to leverage your security. If you can find a documented example of a real threat that this approach would have mitigated, then I'll take it seriously.
This is completely wrong. HTTPS is what secures this, not client-side password hashing. If you don't use HTTPS, an attacker can just MITM you and strip out any kind of client-side hashing.
You are wrong. Client-side hashing CAN be a silly thing, but it can also prevent a (compromised) server from seeing your password, which you probably use on other websites (which is what most people do, unfortunately).
>but it can also prevent a (compromised) server from seeing your password
If the server is compromised, then there is no protection of your cleartext password at all. This is because the entity that compromised the server can replace the original JS with anything, including new JS that sends your cleartext password off to their own host as you type each character.
The only activity on your part that can save you against compromised servers is having a unique password per server (i.e., not reusing any passwords).
Not true in modern architectures; that situation only applies to more traditional file-and-API-server combos. If you serve your site statically from a service like S3 and have a backend running on Lambda or EC2, the attacker cannot modify the static assets, and the client-side hashing will prevent them from seeing the plaintext password.
The answer is that it depends. We could be talking about JS protected with SRI, signed updates with an Electron client, a browser plugin or native hashing, a protocol similar to SSH that hashes the client password, etc.
This is only true when client-side hashing is under control of the client. In a web browser, it is not. The browser will happily run whatever JS the server sends it. So if the server is compromised, it can send compromised JS, and there goes your client-side hashing protections.
An example of where it might work is in an app, where you're getting the client code from a separate channel like an app store.
It can protect you against non-malicious issues on the server side. If I recall correctly, Twitter recently discovered that they were logging passwords in plaintext by accident. With hashed passwords you reduce the exposure of actual passwords in that type of situation.
I think totony meant sending passwords without pre-hashing, but yeah, it doesn't make sense to send any confidential information in clear text; it should be sent via E2E-encrypted TLS channels.
Furthermore, pre-hashing doesn't necessarily make transmitting confidential information safer, as one could argue that your client-side JavaScript can be reverse-engineered, giving the attacker more information about how you hash your data.
Really, your back end should just treat pre-hashed passwords like any other password.
Ideally, if TLS were being MITMed somehow, such as via a dodgy root cert, it would shield the user's plaintext password so it could not be used to log in to other services. The problem is that as soon as there is a TLS issue, an attacker can modify the JS to just send the password in the clear. It would really require code that can't be modified by the attacker, which means there would have to be some sort of browser support. Otherwise it does nothing against the very attack it would protect against.
The main benefit is offloading some of the computational workload onto the client's machine. This could allow you to increase the work required to brute-force the password hashes, assuming your database leaks (i.e. increase iterations or memory requirements) - see the sketch below.
Your last argument is security through obscurity: if exposing how you hash makes it easier to brute-force the passwords, your password hashing sucks.
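A minimal sketch of that offloading idea, assuming Python on both ends (function names are made up; the client pays the expensive stretching cost, the server only does one cheap hash on top):

    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # the expensive stretch runs on the client, not the server

    def client_prehash(password: str, salt: bytes) -> bytes:
        # Client pays the brute-force cost; the per-user salt is public.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

    def server_store(prehash: bytes) -> bytes:
        # Server hashes once more (cheap), so a DB leak never exposes
        # the value the client actually transmits.
        return hashlib.sha256(prehash).digest()

    # Signup
    salt = os.urandom(16)  # generated and stored server-side
    stored = server_store(client_prehash("hunter2", salt))

    # Login: client recomputes the prehash, server re-hashes and compares
    attempt = client_prehash("hunter2", salt)
    assert hmac.compare_digest(hashlib.sha256(attempt).digest(), stored)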
Yes, I meant sending the password in cleartext inside the transport protocol*
Pre-hashing doesn't prevent an attacker from stealing your account if they can read the communication, but it prevents them from obtaining your password and using it everywhere else you might reuse it or a permutation of it.
Almost any http site with a login form is sending your password in cleartext. Thankfully, initiatives like Let's Encrypt have made plain http sites much less common than they used to be.
Hashing the password before sending it doesn't really help you much - the naïve approach is vulnerable to "pass-the-hash" (where you basically send the hash instead of the password as the authentication token). The secure approach involves either some kind of challenge-response or a nonce salt, but these aren't as easy to implement correctly.
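A rough sketch of the challenge-response variant, assuming Python and a server that stores an unsalted hash (a toy construction to show the moving parts, not a vetted protocol):

    import hashlib
    import hmac
    import secrets

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Signup: server stores h(password) (toy example: no salt or stretching).
    stored = h(b"hunter2")

    # Login: server issues a fresh nonce so a captured response can't be replayed.
    nonce = secrets.token_bytes(16)

    # Client proves knowledge of h(password) without ever transmitting it.
    response = hmac.new(h(b"hunter2"), nonce, hashlib.sha256).digest()

    # Server recomputes from its stored value and compares in constant time.
    expected = hmac.new(stored, nonce, hashlib.sha256).digest()
    assert hmac.compare_digest(response, expected)

    # Caveat: anyone who steals `stored` can still answer challenges -
    # this defeats replay on the wire, not a database leak.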
Indeed. And who is hashing passwords on the client? That would require either not using a salted hash, or sharing the server's salt with the client, in order to obtain identical hash values for comparison. In either case that system's entire password inventory would be a lot more vulnerable.
TLDR: don't do that; send passwords over SSL and use a good password hashing algorithm on the server, like bcrypt.
Yep. Proper password hashing requires a per-credential salt, a pepper (shared across all credentials) and a strong algorithm (IV, iteration count, etc.). Revealing all of that information is a leak, and arguably makes client-side hashing less secure (by giving away a lot of parameters for attackers to work with).
Yes, adding a pepper is a recommendation, not a mandatory step. But a lot of sites do it, e.g. PagerDuty [1], paired with PBKDF2, since many apps need to meet FIPS certification or enterprise support requirements on many platforms. [2]
Your password _is_ whatever you send over the wire. Doing a hash in JavaScript before sending it won't obscure the user's password from anyone who can see their traffic; it will obscure the user's password from the user.
No, the password is whatever you send over the wire. If a website turns your attempt to type "password" into "5f4dcc3b5aa765d61d8327deb882cf99" before sending that to the server, then your password for that website is 5f4dcc3b5aa765d61d8327deb882cf99. That's what the server sees and how it recognizes you. The only effect of this is to make it less likely that the user knows his own password.
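(For reference, that string is just the unsalted MD5 of "password", which you can check in Python:)

    import hashlib

    print(hashlib.md5(b"password").hexdigest())
    # -> 5f4dcc3b5aa765d61d8327deb882cf99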
If the user's password is "password" he may be reusing it across 50 other websites. If you leak the information that "password" is linked to "email@gmail.com", I can hack the 50 other websites. If you never knew that the user's password is "password", you cannot leak it and I cannot use it to log in to the 50 other websites. Leaking "5f4dcc3b5aa765d61d8327deb882cf99" is useless to a hacker, because he can't go and use it to log in to another website.
So, there are properties that differentiate "password" and "5f4dcc3b5aa765d61d8327deb882cf99", even if for the server it's all the same.
The distinction you're trying to draw vanishes as soon as this becomes a standard practice. Passwords are already stored hashed and salted. They get compromised anyway, because the data is valuable. Under the circumstances you describe, cracking 5f4dcc3b5aa765d61d8327deb882cf99 (which takes less than a second) is just as valuable as cracking a password database entry is now, because the underlying issue -- reuse of credentials -- hasn't gone away. (In fact, you're encouraging it, so it's probably somewhat worse.) As long as people are reusing credentials across multiple websites, those credentials will have value greater than that associated with their use on any particular site, and other people will put in the effort to crack them. Even when you're generating and submitting a cryptographically secure salted hash, you haven't improved on the situation now, where databases store a secure salted hash of the password.
How is sending the password to the server over HTTPS bad? What would you do otherwise? Hash it on the client? Then you're not using salted hashes for your password store, which is far worse. Or you're hashing twice: first with no salt client-side, then again with salt on the server side. That's fine, but the client-generated hash must be unsalted, so it's basically just the password itself: stealing the client-generated hash instead of the original password is just as good, with only a minor loss in value (you might not be able to reuse it on other sites for the victim - though maybe you still could, if you can build a reverse index of common passwords hashed using whatever algorithm is in use).
And if you don't trust HTTPS to protect sensitive information, why would you send auth cookies over it, which have virtually as much power as the password that was given in exchange for them in the first place?
> Hash it on the client? So are you not using salted hashes for your password store?
There is no reason you can't also salt on the client. Salts do not need to be secret. The substantial constraint you outlined in your comment isn't a problem.
If the client hashes the password then the hash itself is the password, meaning stealing the hashed passwords is the same as stealing the plaintext passwords they're based on, since you can submit them directly.
Blizzard Entertainment does half-client, half-server hashing, which is rather clever - one of the few examples where client hashing makes sense.
I'm curious, how is half-hashing the password different from really hashing it?
The best protocol I know of is to derive a signing keypair from your (salted, stretched) password, and store the public key on the server instead of a password hash. Then during login, the server sends a challenge to the client, and the client signs it. The server never sees any secret material at all. Keybase uses a version of this protocol.
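A minimal sketch of that protocol, assuming Python with the PyNaCl library (the salting/stretching step is simplified, and all names are illustrative, not Keybase's actual construction):

    import hashlib
    import secrets
    from nacl.signing import SigningKey

    def derive_signing_key(password: str, salt: bytes) -> SigningKey:
        # Salted, stretched KDF output becomes the Ed25519 seed.
        seed = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return SigningKey(seed)

    # Signup: client derives the keypair; server stores ONLY the public key.
    salt = secrets.token_bytes(16)
    server_stored_pubkey = derive_signing_key("hunter2", salt).verify_key

    # Login: server sends a random challenge and the client signs it.
    challenge = secrets.token_bytes(32)
    signed = derive_signing_key("hunter2", salt).sign(challenge)

    # Server verifies with the stored public key - it never sees any secret.
    server_stored_pubkey.verify(signed)  # raises BadSignatureError on failure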
Unfortunately all the magical client side crypto in the world doesn't save you if the attacker can compromise your server and then send clients bad JS :p
Then when you receive spam/unsolicited marketing emails, you can see which address the spam was sent to, and therefore which company sold your data.
This suggests the only way to keep this behaviour is to host your own email and use a truly different address per service.