There is also the possibility of timing attacks on either type of request. From its length you can tell when an HTTPS request is most likely the POST to /PWDStrength, and from the times at which the requests are initiated you can guess at some characteristics of the password (maybe they stopped typing for a second to verify requirements after typing 7 characters; maybe they stopped after 8 because they had to move to the numpad on their keyboard).
edit: the best solution for this is probably to wait a specified interval between requests, rather than sending one with each character.
That's a very interesting interpretation of the linked papers.
While timing information may make brute force attacks against the passwords easier, it is not feasible to reconstruct passwords based on the timing information exposed by Ebay.
It is also worth noting that the ability to perform more efficient brute force searches doesn't really matter in the case of Ebay, as it will not make such attacks feasible over the internet.
This attack depends on being able to identify individual keys so it's not really applicable here. However, a similar attack might be possible here if not for the very small sample size.
There are many failure modes for encryption that most people rarely think about. E.g., if someone encrypts either the US Constitution or Hamlet, you can tell from the traffic size which one it was. For a physical example: if college rejection letters arrive as a letter but acceptance letters as a package, it's obvious to your mail room who got accepted.
This is probably secure, but non-standard password exchanges open up a lot of possibilities.
Yes, a password manager likely negates this kind of attack. Although the timing info likely gives away that you're using auto-fill (which isn't useful to an attacker, just interesting).
One small idea:
Once the network traffic starts flowing, it's difficult to switch to the Javascript tab because it's constantly flowing off the screen. Maybe make the tabs fixed in place? Or have a check mark that can make them fixed or unfixed?
Very cool. You might be able to get some traction by using this to show some examples of common web vulnerabilities (just like this one) in the wild. I'd read a blog with those religiously. This was great.
Cool. I was just using ScreenFlow today to capture a video of a bug for my team. This would have saved me a lot of time today. Just signed up for the beta!
Sending a request on each keyboard event to determine password strength is not only a security vulnerability, it's also poor design. APIs should primarily be used to consume external resources, not stand in for client side functionality.
If providing an API for password strength is important (i.e. you want to guarantee the same behavior across clients), think of your business logic as a resource and not a service. Rather than force the API to figure it out, have the API deliver the criteria for this behavior (regex strings, bounds of password length, etc.) and let your clients figure it out, as sketched below. This addresses the security concern, decouples your client side and server side logic and improves performance across the board by reducing network requests and absolving the server of this responsibility.
If you must go with this design, at least move from a `GET` to `POST` like others are suggesting.
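For illustration, a minimal sketch of the criteria-as-a-resource idea in browser JS; the endpoint name and criteria shape here are hypothetical, not anything eBay actually exposes:

```js
// Fetch the validation rules once, then enforce them locally on each keypress.
async function loadCriteria() {
  const res = await fetch('/password-criteria'); // hypothetical endpoint
  return res.json(); // e.g. { minLength: 8, maxLength: 64, pattern: "(?=.*\\d)(?=.*[a-zA-Z])" }
}

function validatePassword(pw, criteria) {
  const errors = [];
  if (pw.length < criteria.minLength) errors.push('too short');
  if (pw.length > criteria.maxLength) errors.push('too long');
  if (!new RegExp(criteria.pattern).test(pw)) errors.push('missing required character classes');
  return errors; // an empty array means the client-side rules pass
}
```

No request leaves the browser per keystroke; the server only sees the final password on submit.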
Sending your password as you type it as a GET request query parameter seems awfully hazardous. As you point out the password will appear in all manner of places, such as HTTP server logs. As the username/email is not included an ops person might not directly know from the GET request alone what user the password belongs to. It is not difficult to imagine however that they have enough info to correlate the IP address of the password strength request with a user.
> there are some reasons behind our current solutions but I wouldn’t be able to give you more details on it.
I'd be curious to know if anyone here can come up with a good enough reason for sending out the user's email & their password(-prefix) at every keystroke?
This actually just sounds like a really bad implementation. Some front-end dev wasn't sure what's a good timeout to fire the password to the server on, so he or she just put it on keypress.
And then he included the email too, so the backend could look up the user and make a custom password blacklist for this specific case (e.g., no personal details allowed).
I actually don't disagree with doing a POST of a password to check password strength server-side. It might be a bit "cheaper" in some cases.
But sending on every keypress and including the email - that's just silly.
I am well aware of that, and agree with you. There had to be some seriously bad decisions made here, and it certainly doesn't look like someone from a big company would make such a mistake.
Yet, those kind of bad decisions are made every day, by people all around the world. I wouldn't give benefit of the doubt to anyone these days.
Yeah, but being a powerhouse doesn't mean they don't introduce silly bugs. They do. E.g. on Facebook, a year or two ago, you could use dev tools, change a hidden input field's value when writing a post, and post to anyone's timeline (this story got tons of coverage for a bunch of reasons, the vulnerability itself not being the prime one). Does it seem like a silly bug? Definitely. But it happened; it's not the first one, and not the last one.
So it's a bit naive to assume devs at popular companies don't introduce bugs, that they're superhuman, etc. :)
I worked at a Fortune 100 that does billions in online sales. You'd be surprised at how often little, improper things like this can just percolate into production. And then they're defended by the people who allowed it to happen.
>I'd be curious to know if anyone here can come up with a good enough reason for sending out the user's email & their password(-prefix) at every keystroke?
I wonder if it ties into their fraud detection systems somehow.
Fraudsters are lazy - so lazy that, for a good long time, you'd see the exact same few recycled photos of counterfeit items being used in item descriptions. No idea if that's changed recently.
Anyway, going back to my main point: I wonder if something about password entry and email address choice serves as an early warning flag.
I'd kinda be surprised, but I could imagine it potentially being useful.
That might be it. They might use it to detect if someone is pasting a password in vs typing one in. Which might help identify against bots / attackers stealing someone's ebay account.
Which would explain why Ebay would be secretive. Because the detection is easily mitigated if attackers become aware of the detection.
I guess that fraud could be a big part of this; getting every character in sequence says way more about the end user than getting only the final password.
I wonder how this will affect password manager users though.
If this was the case, I would think that a single request where they record the timing between characters client-side and post that timing information along with the password would work better. Timing incoming POST requests as part of a single password reset "session" seems fraught with problems; I can't see how you could really trust the timing numbers you would get. I type my password pretty fast generally, and I wouldn't be surprised if the margin of error on that timing is a significant percentage of the average time per key press.
Of course you can't trust anything from the client and both methods are subject to tampering, I'm not sure which is more tamper resistant.
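A sketch of that single-request alternative, with the endpoint and payload shape invented for illustration (and, as noted, still tamperable by the client):

```js
// Measure inter-keystroke gaps locally; submit them once with the form.
const passwordField = document.querySelector('#new-password');
const form = document.querySelector('#password-form');
const gaps = [];
let last = null;

passwordField.addEventListener('keydown', () => {
  const now = performance.now();
  if (last !== null) gaps.push(Math.round(now - last)); // milliseconds between keys
  last = now;
});

form.addEventListener('submit', () => {
  // sendBeacon fires a small POST that survives the page navigating away
  navigator.sendBeacon('/pwd-timing', JSON.stringify({ gaps }));
});
```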
Both use cases could justify calling an API at every keystroke, where you send out either the user's identifier in the one case (to extract the timing info), or the password(-prefix) in the other (to check for typing errors). Linking together these two is where it becomes especially dangerous.
You can use a client check to check for the basic requirements, like minimum and maximum length, characters required or allowed etc. Then when the user submits his password, you can do a serverside check.
The reason for using a server-side solution is for a password strength indicator. You need the full algorithm to run against the current entry, and every user-friendly implementation does this on every character input so you know when what you have typed is "strong enough". I'm not particularly a fan of password strength indicators in general, but if you're going to do it at least do it cleanly.
Is their algorithm so complex that it can't run in a reasonable time using client side javascript? I have trouble thinking of anything that isn't vastly over-engineered and runs slower than the network lag probably is.
Perhaps their algorithm isn't written in javascript?
Or perhaps they want to check the user's password choice against a multilingual dictionary, and they decided to save the user the multi-megabyte download?
No, it's more about leaking information to the JS client. If, for example, their password verification rules stipulate that you can't reuse any of your last N passwords, then they would need to make this check server-side as they don't want to provide that information to the client.
Which sending a POST on every keystroke won't really help with anyway, because they can't tell that your typing "h-u-n" will match your old password of "hunter2", assuming it's properly hashed.
If I were in charge of both requirements and implementation I'd debounce the input by 300-500ms and display a "loading" spinner in the password complexity box until the debounce timer and network request had fully resolved.
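Roughly, and purely as a sketch (the endpoint, element IDs, and renderStrength helper are all hypothetical):

```js
const passwordField = document.querySelector('#new-password');
const spinner = document.querySelector('#strength-spinner');
let timer = null;

passwordField.addEventListener('input', (e) => {
  spinner.style.display = 'inline'; // show "loading" in the strength box
  clearTimeout(timer);              // restart the ~400 ms quiet period
  timer = setTimeout(async () => {
    const res = await fetch('/pwd-strength', {
      method: 'POST', // body, not URL, so nothing lands in GET logs
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ password: e.target.value }),
    });
    renderStrength(await res.json()); // hypothetical: paint the strength meter
    spinner.style.display = 'none';
  }, 400);
});
```

One request per pause in typing instead of one per keystroke, which also kills most of the timing side channel discussed above.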
I was just trying to explain why, given some business use-cases, doing password validation on the client isn't always possible.
I think an overly complex password strength checker is probably doing it wrong. Besides, when it comes to password complexity you need to tell the end user what the rules are!
Combining a password strength meter with password rules is the sort of thing where most implementations are going to get it wrong. It clutters up the UI a lot, to the point of being too complicated for "average" users to understand.
I prefer seeing one system or the other, but not both.
If you have rules, don't use a password strength meter. Rules are going to be a binary result - "yes the password is good enough", or "no, this password is not acceptable" - which is represented by showing which rule(s) were not respected by the user, or a green checkmark.
A password strength meter can be used when there are no rules enforced, to let the user decide for themselves whether that red progress bar showing a weak password is good enough for their needs. The meter doesn't need to be binary, with red/orange/yellow/green stages. The only reason I don't like password strength meters in general is that there is no standard across sites/services as to what constitutes a strong password. One site will show "password123" as weak, while another will say it's fairly strong. Each implementation has its own arbitrary algorithm that is likely not representative of "true strength".
This might get downvoted because it's just a link, but:
zxcvbn is actually a great password strength library, JavaScript, client-side, and only about 400 kB or so last time I checked (compressed, including (!) dictionaries). It was developed by a Dropbox engineer for the password setting/changing dialog at Dropbox, and open sourced, if I'm not mistaken.
Again, this is a great tool: client side, small (smaller than most webpages and ads these days, at any rate), and it also lets you provide a list of "custom blacklist words" not to use in the password (e.g. username, site name, etc.).
AFAIK, zxcvbn really is the gold standard here.
Given this, I don't really see how a server-side check is better or necessary. Ebay really ought to provide a much better answer than "trust us" here.
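For reference, basic zxcvbn usage looks roughly like this once the script is included (field names are from memory of its README, so treat them as approximate); the second argument is the custom blacklist mentioned above:

```js
const result = zxcvbn('P@ssw0rd1', ['myusername', 'ebay']); // site/user words to penalize
console.log(result.score);                // 0 (weakest) through 4 (strongest)
console.log(result.feedback.warning);     // e.g. "This is similar to a commonly used password"
console.log(result.feedback.suggestions); // human-readable tips for improving it
```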
Why do it on the client and not on the server? It's just an implementation decision. I would never bug someone for choosing either one over the other. Most implementations probably have issues with the employed algorithm, best not to waste effort debating the delivery method.
Or they want to disallow reusing previous passwords, without leaking them to the client.
As an aside, I have always wondered how it is possible to disallow reusing previous passwords if the password is only saved on the server as a salted hash, which is recommended I believe.
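One common answer: the server keeps the last N salted hashes and, during a password change, runs the plaintext candidate (which it briefly has anyway) against each of them. A minimal sketch with the Node bcrypt package, which embeds the salt inside each hash string:

```js
const bcrypt = require('bcrypt');

// previousHashes: the user's last N bcrypt hashes, loaded from the DB
async function isReused(candidate, previousHashes) {
  for (const hash of previousHashes) {
    // compare() re-hashes the candidate using the salt embedded in `hash`
    if (await bcrypt.compare(candidate, hash)) return true;
  }
  return false;
}
```

This only works at change time, while the plaintext is in hand; you can't compare two salted hashes to each other directly.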
How common is it to have internet fast enough that a POST request completes between characters? I would expect it to complete in a second or two, enough time for a human to type about 5-15 characters, making the timing information completely meaningless.
This would require that after a second or two the post requests will be instant, and after that second or two they will travel back in time to be delivered simultaneously with all the other requests.
The timing information, despite possibly arriving with a bit of a delay, will be just fine. Not only that, but if they really wanted to, they could just grab the TCP timestamps.
Not a good enough reason, or a sane one, but what if it's a honeypot of sorts? Perhaps eBay is using the timing information themselves to flag hack attempt sources...
Ugh, I had to do this once on the sign-up page at a small company I worked for about 10 years ago. Ever since then, I've been wary about beginning to fill out any forms unless I really, really want them to have the info. I still think it's messed up to store user data that hasn't been submitted.
Yeah, there are shady techniques from companies offering cart abandonment solutions that do exactly that (e.g. VEInteractive). If you type in an email address and don't submit the form, they'll still send you chaser emails.
I think there are valid reasons to store incomplete form data, but I don't think it should be used for reasons the customer did not intend (e.g. receiving emails).
This one is interesting. Looks like they don't send any requests for username. For email address, they have a delay before sending to the server to see if it's a used email address (if you type quickly enough, it will only send a single request to validate the email).
For password, they start sending every character you type once the field has 6 characters in it. It then sends your full form details on every keypress (plus it has a delayed send to the same password_strength call, similar to what they do for emails). So if you type your password slowly, it will send your details twice for every keypress.
> This is not a security vulnerability itself because I think they have implemented this for some reason
IMO just because the behavior is by design doesn't mean it's not a vulnerability. That said, this one seems like a grey area. I'd be worried about password information leaking by making TLS attacks easier in this mode.
This only affects a specific form that the user might interact with once a year (and that's being really optimistic), I don't really see it generating enough requests to make TLS attacks easier.
If it increases the attack surface at all, it makes it easier. Being that this site facilitates monetary transactions, I would hope they would be trying to limit their attack surface in any way possible.
I think the real point here is that there are more secure solutions. Saying that it's not all that less secure isn't a great argument.
>I think the real point here is that there are more secure solutions. Saying that it's not all that less secure isn't a great argument.
I'd say it's a very good argument, this appears to be a non-issue that doesn't justify the dev time spent on "fixing" it. We don't live in a world with infinite dev resources.
Edit: Since someone appears to disagree, how would you exploit this "bug"?
Parameters sent via GET can get cached by proxies and they appear in log-files.
Not to argue in favor of sending sensitive data via GET, but I think it is worth pointing out that third-party proxies cannot see the URL or other parts of the HTTP headers or body when the connection is using HTTPS.
But there is a good chance that these GET parameters are logged by the webserver. Even if these servers are very secure and strictly monitored, one bad employee can cause a lot of trouble.
Perhaps, but an employee in that position can steal credentials even without GET logs.
This entire discussion is predicated on a contradictory assumption, that an employee would be corrupt enough to steal credentials from web server logs, but not corrupt enough to steal the same credentials from any other source (inc. database access).
It is like letting a criminal into your home, then being concerned that they might see your security system's pin written on a sticky note on the fridge. Sure, it is a problem, but ultimately the criminal doesn't need that pin to steal your shit, you already let them walk right in.
GET logs end up in all sorts of places. I would not be at all surprised if anyone working at EBay could get access to them. Not to say they should have access to them, but access to the logs is different from access to the server. Log reading permissions have a rightfully lower standard than ssh/deployment permissions.
(But part of what makes it OK to have more people with access to the logs is you don't put things like username/passwords for all of your customers in the logs.)
With that logic it doesn't make sense to store passwords encrypted in the DB then either. If an outside attacker gains access to a system it would really suck to have a bunch of passwords sitting in logs unencrypted. Security in depth and all...
Often times server logs are sent to other locations (such as central locations) for storage. This can be storage for compliance purposes. I wonder if these are logged and sent to some other location. They may be visible to a great many people who don't have direct server access.
In general don't log sensitive information because you don't know how those logs will be used.
Not exactly. In corp/uni environments there may very well be an SSL-intercepting proxy - it works because in a corp setting you have the fake CA cert installed by IT, and at a uni you often have to accept a cert when first connecting to the uni VPN.
No, it does not, because such an appliance will usually have some sort of logging, which typically includes the URL, which in turn contains the GET parameters.
If there is an inserted CA then I believe any cert from any website can be MITM'ed and there are appliances that do this.
From PaloAltoNetworks website:
"... firewall proxies outbound SSL connections by intercepting outbound SSL requests and generating a certificate on the fly for the site the user wants to visit."
I'm writing an HTML5/JS end-to-end encrypted chat app for just this scenario. It won't stop a nation state modifying requests in transit and injecting their own JS, but it will probably stop a nosy sysadmin.
Pinning doesn't work against the "corporate CA" scenario, at least if the user is using Chrome:
Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.
Thanks for that. I had no idea. I wish there was a way to grab the certificate in Js. Just so you could alert the user that they are MITMed. As it stands I will have to instruct them to check manually.
I've considered it, but the overhead on the server side would be too much for the ad-free (ads = loss of privacy, IMO) and non-monetized vision I have. I use Diffie-Hellman to distribute an encrypted master key to each client that is used initially for the chat. I'm going to tell users not to consider that key private (I can certainly man-in-the-middle it from the server, as I'm the one who generated it), but they can use it to discuss what private key they will actually use and then enter that manually ("my brown dog's name + my birthday with only the first letter of my last name capitalized", for example). The master key I sent earlier also includes a salt to add to the hash of the password they select, so even if the key they pick is weak it still might protect them. Everything is wrapped up in AES-256 thanks to the Stanford Javascript Crypto Library.
I may use openpgpjs down the line for private messages within rooms. I also want to experiment with WebRTC for private messages and maybe offer some opportunistic peer-to-peer connections, but I haven't gotten that far yet.
Why did we never improve HTTP Digest Auth, instead letting everyone implement their own mechanism, with the number of sites using a challenge-response protocol not even worth a mention? Do we have to wait until 2018 before https://tools.ietf.org/id/draft-yusef-httpauth-srp-scheme-00... can be a thing? Not saying SRP is the best option, but compared to what's implemented on websites right now, it is much better.
EDIT: I probably am missing details, but surely some secure challenge response protocol must be available for broad implementation in browsers without concern for patents, right?
SRP is an "Augmented PAKE" which does not require the server to ever see the plaintext password. I'm not aware of any others that are claimed to be patent-free.
Avoiding the patents on other protocols seems to have been one of the goals, but then Thomas patented SRP itself: https://www.google.com/patents/US6539479, which (filed Jul 14, 1998) is set to expire in two years minus 15 days.
2 or maybe 4 years would be reasonable to earn back (some or all of) the investment, and allow others to improve upon and maybe even patent the new invention. As it stands, whole industries are held back due to 20 years for patents.
For those who didn't read TFA - it does this for the password strength checker when creating a new password, not when logging in.
Honestly, I can see the challenge here. A truly robust password strength checker would use dictionaries, making it too heavy to run on the client, and for usability reasons you'd want it to check on keypress.
But it would be nice at the very least if they'd send it as POSTs in the body, not GET parameters.
The general argument here is server logs. You'll see the entire url show up for GET. By using a POST and actually putting the data in the post body you won't see it show up in logging.
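To make that concrete, here's a made-up pair of combined-format access log entries. The GET leaks the password into the log line; the POST leaves only the path:

```
203.0.113.7 - - [10/Jun/2016:12:00:01 +0000] "GET /PWDStrength?password=hunter2&email=a@example.com HTTP/1.1" 200 54
203.0.113.7 - - [10/Jun/2016:12:00:03 +0000] "POST /PWDStrength HTTP/1.1" 200 54
```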
The ones used by security experts are in the GB range.
Obviously you could take more efficient approaches, like converting characters to recognize that P@ssw0rd is just Password, but then you've increased the algorithmic complexity you're sending to the client. If you want to get super-fancy, you've got to find word boundaries and whatnot to see that MyP45512345 is really just MyPass12345.
Of course, the simple brute force approach (server-side check: is my password in this 5 GB DB of passwords?) might be too slow for this case anyway.
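As a toy illustration of the substitution idea (a real cracker or checker handles vastly more patterns, word boundaries included):

```js
// Naive leet-speak normalization before a dictionary lookup.
const LEET = { '@': 'a', '4': 'a', '3': 'e', '1': 'l', '0': 'o', '5': 's', '$': 's', '7': 't' };

function normalize(pw) {
  return pw.toLowerCase().replace(/[@431057$]/g, (c) => LEET[c]);
}

normalize('P@ssw0rd'); // "password"
```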
> The ones used by security experts are in the GB range.
Citation? The only multi gigabyte "dictionaries" I've seen are rainbow tables. I'm genuinely curious why you'd need multiple gigabytes when the Dictionary.com app a few years ago was no more than 200 megabytes.
The (most excellent) zxcvbn password strength checking library [1] (developed by an engineer at Dropbox) is 400 kB (compressed) including dictionaries.
I knew there was a reason I always prefer POSTing data as opposed to GET query params.
It still gives attackers the knowledge that if they can get access to the logfiles, they can see passwords. Then the problem becomes getting access to the logfiles!
Any leak of relevant information about security is of potential value.
It's less of a concern about an attacker gaining access to the log files, as it is that passwords should simply not be stored plaintext... anywhere. One doesn't really need to ask "why", it's just good common sense.
They almost certainly do this to detect bots trying to change passwords. If a bot tries to change passwords for hundreds of accounts at once, it will end up sending thousands of requests to the password checker and get IP-banned, and the system can silently reject every password it tries to submit so as not to tip off the attacker that they have been detected.
It is a terrible way to implement bot detection, but with eBay owning PayPal they are on the hook for lost revenue, so bot detection probably takes higher priority than other security concerns; the actual economic impact of bots stealing hundreds or thousands of accounts at a time is that bad for them.
How has no one here made the observation that the reason for this is true password strength checks, which use existing password distribution data that is prohibitively large to send to the browser?
They're not doing the wrong thing, and the risk of side-channel attacks on this infrequent behaviour (i.e., not authentication) is trivial compared to the risk of apparently high-entropy passwords that are in fact highly reused, and thus vulnerable to trivial brute force attempts.
For all those people who think that eBay doesn't want anyone to work out their password complexity algorithm: you could just write a script that works out the minimum number of characters and then submits a password list to the strength service. Then you'd know which passwords eBay rates strongest, and from there you can hopefully find patterns to construct rules around for running dictionary cracks.
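Sketched out, the probe might look something like this; the endpoint and response field are invented for illustration:

```js
// Feed candidate passwords to the strength service and record its verdicts.
async function probe(candidates) {
  const verdicts = {};
  for (const pw of candidates) {
    const res = await fetch('/PWDStrength', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ password: pw }),
    });
    verdicts[pw] = (await res.json()).strength; // hypothetical response field
  }
  return verdicts;
}
```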
After reading the public post and these comments, do you think they (eBay) will give a better explanation... or rather, any explanation... as to why they do this? Passwords are becoming difficult to maintain, even with a password manager. They should, at the very least, obfuscate it in some way.
I wonder if there is any example of a large corporation taking action after a flaw is submitted via an online email form? I think those forms are sent to people whose job it is to disregard their content as much as possible.
It is far safer not to put it in the URL in the first place. I can easily see a new sysadmin coming in, wondering where all the logs are for a URL, and enabling them.
Or they send their logs to an analytics firm. The firm says innocently enough "it doesn't look like we are getting all the logs" and then it is turned on.
There are a lot of ways a policy can be circumvented just because people were trying to do their jobs and didn't know better. Also, it is highly unlikely that they have another process to confirm that they aren't logging that URL.
Because (a) it's very unlikely, given the use of POST elsewhere, that they even realised this was using GET, and (b) other services can log URLs, such as your browser's history. By default it may not matter, but perhaps if extensions get involved...
It's true, this isn't a straightforward vulnerability but it doesn't seem to be well-considered given the inconsistent use of both GET and POST for the same terrifying call.
This is not really ideal either, because the hash becomes the password. If an attacker got the hashes from your DB, he would only need to send the stolen hash to the server to authenticate.
The ideal way to deal with passwords would be something like SCRAM [1], but you are adding a bunch of complexity on the client side, and you'd need to trust your JS libraries.
Hashing it on the client side doesn't really have any positive effect on security, as the client must then know what salt is used for the hash. This is less secure than just hashing on the server, where the salt and number of hash iterations remain unknown to the client (or potential attackers).
Whatever the server receives, it should do all the good things, salted hashing and what-have-you. But no one says what it receives needs to be a plaintext password.
Hash on the client side before sending - unsalted, or salted there as well with the salt passed along to the server - but let's just ensure that the server never has the ability to see a plaintext password. It can't log it; it can't accidentally leak the plaintext.
Will that solve all problems? Oh, hell no. But it at least strengthens the mitigation against certain attacks or mistakes.
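A minimal sketch of the client-side half, using the browser's Web Crypto API; note the caveat raised just below, that the server must still salt-and-hash whatever it receives, or this digest simply becomes the password:

```js
// SHA-256 the password in the browser before it's ever sent.
async function prehash(password) {
  const data = new TextEncoder().encode(password);
  const digest = await crypto.subtle.digest('SHA-256', data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join(''); // hex string to submit instead of the plaintext
}
```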
If you hash, with or without salt, on the client for changing the password, you'll also need to hash identically when checking it (i.e. for login). In effect, the hash becomes the password; even if the plaintext is never leaked the first-level hash is just as good for access.
Right, but if a hacker releases a password dump for site X, no one has your password in plaintext, just the log in hash. That said, that solution requires JavaScript.
Yes, but then the attacker can ignore your JavaScript and just send the hash value they got from the dump. If you calculate hash(password) and send that for comparison to the hashed password stored in the user database, then hash(password) is your password from then on.
Isn't the salt not necessarily supposed to be secret from the attacker, though? Isn't it usually something like the username, since it needs to be different for each user but still easy for the server to figure out?
I'd agree if the client in some way exists independently of the server. For example, if you have a smartphone app, then client-side hashing could be useful.
But for a web page, what's the point? The server is in full control of the JavaScript they send you. If the server is compromised, it can easily bypass the client-side hashing by sending your browser different code.
Sorry, as I keep saying in this thread, the excellent client-side password strength checking library zxcvbn (https://github.com/dropbox/zxcvbn) is under 400 kB compressed including dictionaries, checking against
"30k common passwords, common names and surnames according to US census data, popular English words from Wikipedia and US television and movies, and other common patterns like dates, repeats (aaa), sequences (abcd), keyboard patterns (qwertyuiop), and l33t speak"
Thus, it is, as a matter of fact, quite possible to do reasonable password strength checking on the client side with a footprint that's a small fraction of many of today's ad-infested websites.
Do your strength checks in javascript client-side, then hash, then send. Server side can do further checks if it wants on the hashed password (hey, this password was already used, etc).
Password strength checking is (properly understood, in my view) providing help to the user, not enforcing some silly and annoying "password validation" rules.
Neither bot nor copy/paste detection require this. If you're checking keystroke pace/timing you could simply send a keystroke or clipboard event without the actual contents.
Also, like, if they're sophisticated enough to be doing that, they should probably get the basics right.
Bad title: it works only while you have the password field focused.
Bad content: when you log in or register you send your password to the servers anyway. It's irrelevant, since all connections (as shown in your post) are made with https.
One could argue "they are seeing what you write even if you haven't sent it yet", but meh, it's just a damn password field, not a chat field.
As I said in the post, it is not a security vulnerability in itself, but I want to point out that it can be very dangerous to put a password in a GET request.
And the response of ebay is bad too.
But thank you for your constructive comment ;)
If you're on HTTPS ebay, sending GETs to HTTPS ebay, then the GET parameters are not sent in plain text. The OWASP article you link to mentions that GETs can be sent in the clear when you have a mixed HTTP/HTTPS scenario. I think your screenshots are a little misleading, as not all of the headers and information you show are sent in the clear when using TLS. The response of ebay seems OK; this isn't a big issue at all.
EDIT: Sorry for the misunderstanding: as mentioned elsewhere, the problem is not so much the user-agent end, but the hops between where the decryption happens and where the information is used. Why expose the information more than needed there? So I guess ebay's response is a bit lacking. They could make things more secure with relatively little effort.
Except that ebay's response was to the POST over https he mentions in the first section of his article. There is absolutely nothing at all suspicious about that. He wasn't looking into a potential security hole there, he was just prodding as to why they do server-side validation in a completely secure manner. His email had nothing to do with security; he was wasting someone's time asking about implementation details.
He then went on to find a GET version in another area on the site, for which he makes no mention of having sent an email. This might not be considered a security problem to ebay depending how they manage web server logs, but it's certainly a viable inquiry compared to the POST version he did email about.
He's got a point that sending via GET is kinda dirty. I'd hope eBay doesn't log the full URL of requests. Sure, getting to those logs would be super difficult, but you really don't want plaintext passwords anywhere.
In general though, yeah, not that exciting of an article.
HTTPS is often terminated at a relatively early point, e.g. the load balancer, so that the request can be properly routed (e.g. if you use AWS, it's generally terminated at the ELB). That means the request path may be logged by the load balancer and whatever routers/proxies they're using, as well as in the request logs of the web server itself.
It's completely unnecessary to have everyone's passwords be viewable by however many people have access to one or more of those logs (for an org the size of eBay, maybe 10-100 people?).
Sure, it's not as terrible as if it were sent over HTTP, but "not being the worst it could possibly be" isn't a very high bar.
Are you implying that POST data isn't going to be transmitted in cleartext beyond that point? Because that's incorrect - HTTPS doesn't selectively encrypt - the whole connection is encrypted. If you're worried about GET data being sent in cleartext, POST is no different.
The point is that GET parameters are more likely to be stored in server logs or other application logs, whereas the POST body is usually omitted from such logs.
So someone gaining access to the logs will have access to a lot of possibly sensitive data. That all depends on server and application settings, but by default GET parameters are more likely to leave traces than POST bodies.
Ahhh, understood. There's someone between what you see as "ebay.com", where the GET is decrypted, and the actual ebay machine that will use the password information. I was thinking of it at the user-agent level. The GETs never leave your machine in plain text. I did not consider that the other end where you send the information could be "flakey". Bloody hell, it's a miracle anything works at all.
Maybe ebay do the right thing and terminate on the final endpoint, and keep their logs appropriately secured. We shouldn't assume the worst without knowledge.
As others are saying, using a GET request embeds that password in the URL, which means that server logs on eBay's side will have your password in them. Server logs aren't always the most protected thing in terms of locking down systems and permission management. On the flip side, most server logs do not have POST/PUT data logged.
Ummm... you're assuming that eBay is using a standard web server configured in some default manner. It's far more likely that this is communication with a custom authentication server of some sort. (Where server means a very large collection of machines.)
It's likely that eBay's internal infrastructure has compensated for this, but it also seems like a potentially overlooked aspect of their system. Even if there are no server logs per se (unlikely), they might be sending request logging information to some sort of analytics server. Since these requests are internal, it's also possible that it's not SSL-protected meaning that people internally could eavesdrop on the requests.
"Normally" ? What refrain you from logging HTTP Body ? It's the same problem as logging HTTP query string.
You should consider everything you send over HTTPS to be fully visible to the receiver, either way.
The passwords are not necessarily being captured in logfiles, that's a huge assumption. We don't know anything about how eBay stores and manages their web server logs.
I don't know where you got the information that they do not log these requests; assuming they do is a good assumption, not a bad one. It would be atypical not to log every HTTPS request.
A lot of setups have one machine doing the SSL and then forwarding the requests over HTTP to backend servers which are logging the requests and would include GET parameters in the log file.