The main creator of this has been posting about his progress for months; I've personally been following its development ever since I first saw it. It's such a neat project that I can totally believe the growth being entirely organic.
If the "Get Started" button - the first thing you see on the website without scrolling - does nothing but yield a white page that asks for an email address, you're alienating a lot of interested users.
I left at exactly that point. If you can't show me something other than marketing info before I enter my email address, then it's a no-go for me.
You need to give something in order to get something from me. For all I know there is nothing behind the email wall. I'm frankly tired of email marketing and I don't want my email collected and sold off.
Looks like this site tries to give you informed choices. Choices that can only be informed with personal data. I'm not giving up my personal info in exchange for sorting credit card offers. I can find and sort the offers on my own without becoming a source of revenue.
I also left on seeing the email screen. I believe the OP of this thread had it right: I was unsure what was going to happen.
I went back, started scrolling down, and eventually figured out that it wants my CC purchase data to do this. I did not continue with signup, mainly because of trust, although the value proposition sounds good.
It doesn't verify the email address -- jeff@amazon.com wants another credit card -- but then the next step is mandatory linking of bank accounts! Yuck...
Also, how is "We never see, store, or use your credit card information at any time" consistent with "We analyze your spending"?
We only ask for your email address so you can sign in later to see your results. I agree it's a bit unintuitive that we don't "verify" your email, but that's because it's ultimately linked to your bank account.
"Credit card information" in this case refers to the fact that we don't have access to your CC number, credit limit, etc. One of the early feedback we got was that people were worried we would have access to their CC number, so we added that bit in to clarify. I agree that the wording could be clearer.
Why not let me fill out a form that says I spend x on food, y on travel, z on gas, and optimize my results for a combo of up to two or three cards, without having an account?
Even as a demo of how it works and the interface. Then if people want to automate that part, they can make an account and feed their bank data over.
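Something like this, even - a rough Ruby sketch of the calculator I mean, with all card names and reward rates made up:

    # Hypothetical no-signup demo: the user types in monthly spend per
    # category and we brute-force the best combo of up to three cards.
    SPEND = { food: 400, travel: 200, gas: 150 } # dollars/month, user-entered

    CARDS = {
      "Card A" => { food: 0.03, travel: 0.01, gas: 0.01 },
      "Card B" => { food: 0.01, travel: 0.03, gas: 0.01 },
      "Card C" => { food: 0.02, travel: 0.02, gas: 0.02 },
    }

    # For each category, charge it to whichever card in the combo pays best.
    def rewards(combo)
      SPEND.sum { |category, amount| amount * combo.map { |c| CARDS[c][category] }.max }
    end

    best = (1..3).flat_map { |n| CARDS.keys.combination(n).to_a }
                 .max_by { |combo| rewards(combo) }
    puts "Best combo: #{best.join(', ')} (~$#{rewards(best).round(2)}/month)"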
You should also look into the features apps like MAXIVU have. I tell it all the credit cards I have; then, if I can't remember my rewards, I can check which card I should use at which store.
This is definitely on our roadmap. For the initial version, we wanted to provide the fastest onboarding experience, which meant letting people connect with Plaid.
To clarify, we do get access to your credit card transaction history. We do not get access at all to your bank username/password, credit card number, etc. - that's the part that Plaid handles.
I clicked the link before clicking the HN comments. I clicked Get Started, saw the e-mail prompt, immediately went "meh", closed the tab, and came to the HN comments to see if anyone had signed up and whether it was worth my time.
Attempted unjustified use of donation funds is not really any less questionable just because it didn't get past the treasurer - especially if the treasurer is, uh, constructively dismissed right after.
Holding off on the purchase after that event is the least they could've done.
>SSL is terminated at an edge location that is closest to the users.
So this routes plaintext HTTP for most of the connection while giving the managed edge location direct access to the traffic and a valid TLS cert for the domain? Isn't that mostly snake oil, then, since the secure connection never originates from the target service?
Hi. The TLS is terminated at the edge, and from that point we fetch the data from the origin server. As long as the origin has SSL, the communication is secure end-to-end.
>As long as the origin has SSL, the communication is secure end-to-end.
It cannot be secure end-to-end, as your edge location is quite literally performing a MITM. That aside:
How are you validating the TLS cert that the origin presents?
Going by the info on your website, the possibilities are as follows:
Scenario 1: The SaaS provider presents a TLS cert not valid for customer-domain.com when accessed as customer-domain.com.
Scenario 2: The SaaS provider presents a TLS cert valid for customer.saasprovider.com when accessed as customer.saasprovider.com.
Assuming scenario 1, you would need to validate the certificate out-of-band as the traditional trust chain does not validate for the given domain.
Assuming scenario 2, you would need to rewrite the URLs from customer.saasprovider.com to customer-domain.com to prevent the users from following generated resource URLs to the origin domain.
Or am I missing something?
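For concreteness, here's roughly what the scenario 1 out-of-band check would have to look like - a sketch only, with hostnames and the pinned fingerprint as placeholders, not a claim about how your edge actually does it:

    require "net/http"
    require "openssl"

    # The origin's cert isn't valid for customer-domain.com, so normal
    # chain/hostname validation can't be used; instead, pin the origin's
    # certificate fingerprint, learned through some separate trusted channel.
    PINNED_SHA256 = "ab:cd:..." # placeholder fingerprint, obtained out-of-band

    http = Net::HTTP.new("origin.saasprovider.example", 443)
    http.use_ssl = true
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE # chain won't validate here
    http.start do |conn|
      fp = OpenSSL::Digest::SHA256.hexdigest(conn.peer_cert.to_der)
                                  .scan(/../).join(":")
      raise "origin cert mismatch - possible MITM" unless fp == PINNED_SHA256
      puts conn.get("/").code # only trust the origin after the pin check
    end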
the "end"s are the _browser_ and the _origin_, and if there isn't a single secure channel that goes all the way between them, that's not "end to end".
I mean take the "piecewise" argument to its natural conclusion.
If the reason it's okay for you to be in the middle is that you're going to ensure that your request to origin is also encrypted, why should you be the only party in between that can decrypt the contents of the connection?
Why not let the ISP also decrypt the contents? What about the layer 3 interconnect providers? How about your cable modem and your router (they're _probably_ patched 'enough' that it's safe to let them see your plaintext).
I'm harping because misuse of the term "end to end" is _actually dangerous_ to real people.
All of this is to say nothing of the fact that when you allow "middle-boxes", the client no longer has control over the ciphers that are used for the end-to-end connection, so they lose control over perfect forward secrecy.
Been using AndOTP for months, and I love that it supports Android's keystore and device credentials for authentication. I had switched to it from Authy, which was quite heavy.
Aegis' design looks a lot less dense than AndOTP in the screenshots, though it seems to be widely recommended. I'll have to check what that's all about.
The density is configurable in Aegis: there are three "View Modes" - Normal (what you see by default, I think), Compact, and Small. You can then choose to show or hide the account name to further reduce size, and it supports groups for organizing entries.
It's a nice quality-of-life improvement unless you're running low on space. 7+ GB is a lot, especially on embedded devices with little soldered storage - like Microsoft's own products!
Coincidentally, instead of offering upgradeable internal storage, they upsell really hard on the models with more storage in their Surface lineup.
If anything, this should be a toggle - even if only an opt-out toggle.
FWIW, you can install a Linux system, including an entire graphical stack with web browser and mail client, in that space - twice.
This is also going to be a major pain on my Windows VMs. I often give those only ~32 GB of storage to begin with, since they're only for a tiny handful of programs.
I'm assuming there will be a group policy setting I can change, though, since I'm lucky enough to have Windows 10 Pro...
> fwiw, you can install a linux system including an entire graphical stack with web browser and mail client in that space, twice.
I think you could fit quite a few Raspberry Pis with GUI environments into 7 GB.
The OEM and MS alliance has been going on for some time now. MS promises a hardware push, and the OEMs promise to sell hardware only with Windows. I'm surprised they didn't round it up to 10 GB.
> if anything, this should be a toggle- if only an opt-out toggle.
I agree, but that doesn't help sell PCs.
ZFS snapshots would reduce the space requirements.
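For example (pool/dataset names are hypothetical), a snapshot only costs the blocks that later change, instead of a fixed 7 GB reservation:

    # Take a copy-on-write snapshot before the update; it starts out
    # consuming ~0 bytes and only grows as blocks are rewritten.
    zfs snapshot rpool/ROOT/os@pre-update
    # ...apply the update...
    zfs rollback rpool/ROOT/os@pre-update   # undo if it went wrong
    zfs destroy  rpool/ROOT/os@pre-update   # or free it once you're happy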
>And besides: If Sinatra starts a server listening for incoming traffic, why does it still seem common to run a regular webserver like nginx in front of that server?
This is a common technique used for TLS termination and management of "virtual hosts", among other things.
The mentioned "issues" don't seem to be Sinatra-specific, but rather stem from the author's shallow understanding of the topic as a whole.
One of these "other things" is buffering of slow HTTP requests. Application servers are usually not designed to deal with that and are meant to be run behind a proper load balancer/reverse proxy. This is usually very explicitly mentioned in their documentation, for example in https://bogomips.org/unicorn/PHILOSOPHY.html
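For illustration, a minimal Nginx front-proxy sketch covering all three jobs - TLS termination, virtual hosting, and buffering; hostnames, cert paths, and the upstream port are placeholders:

    server {
        listen 443 ssl;
        server_name app.example.com;              # name-based virtual host
        ssl_certificate     /etc/ssl/app.crt;     # TLS terminated here
        ssl_certificate_key /etc/ssl/app.key;

        location / {
            proxy_request_buffering on;           # read slow uploads fully first
            proxy_buffering on;                   # absorb slow readers too
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:9292;     # Sinatra app server upstream
        }
    }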
Puma does a pretty good job of buffering slow HTTP requests, though.
I honestly _don't know_ why we usually put Apache or Nginx in front of our Ruby "web servers". But I keep doing it anyway, because "everyone else" does, it works, and I don't want to take the time to be sure I don't need to.
However, I believe Rails deployments to Heroku generally _don't_ put another web server in front. And, per your point, slow clients are exactly the reason they gave for switching from Unicorn to Puma as the default web server. https://devcenter.heroku.com/changelog-items/594
It is true that in 2018 web dev has gotten complicated (along a variety of axes), and if there's a framework/platform that will let you stay ignorant of that, I don't know what it is! It would probably be one that made a lot more choices for you (like, say, the Ruby web server, so you don't need to think about "Unicorn can't handle slow clients but Puma can") - which is the opposite of Sinatra's philosophy. But then again, Rails' attempt to do exactly that has not resulted in something people find easy either. Shrug.
I've stopped using Nginx in my Docker containers, and I just run puma directly. It's still behind a load balancer that also handles TLS termination. I also serve all the assets from the Rails server, but they're cached with a CDN, so it only needs to serve each file once.
If you're using Docker with Kubernetes, Convox, Docker Swarm, Rancher, etc., then I don't think you need to run Nginx or Apache. I ran some load tests on my staging environment with and without Nginx, and it didn't make any difference.
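For reference, a rough sketch of the kind of container I mean - Puma as the only process, with TLS and load balancing handled outside; the base image and port are placeholders:

    FROM ruby:2.5
    WORKDIR /app
    COPY Gemfile Gemfile.lock ./
    RUN bundle install
    COPY . .
    EXPOSE 9292
    # No Nginx in the image; the external load balancer terminates TLS.
    CMD ["bundle", "exec", "puma", "-p", "9292"]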
If you weren't caching with a CDN, would serving those static assets as efficiently as possible be a good reason to keep using (e.g.) Nginx, do you think?
Oh, I guess load balancing (at multi-host scale) is another good reason; if you don't have Heroku doing it for you, Nginx is a convenient way to do it just fine.
I don't know if there's any middle ground where you'd want to use Nginx instead of a CDN. If it's an internal app then it doesn't really matter. But if you have any reason to worry about the performance of serving static assets, then you should always be using a CDN like CloudFront or CloudFlare, etc.
But yeah, Nginx can be a great solution for load balancing and TLS termination.
I always used to do it for static asset serving, where anything standing between the socket and a sendfile() call is a waste of space. I honestly don't know how good Puma is at sending files, but I wouldn't be shocked to learn it was still worthwhile there.
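Something like the following is what I always had in mind - a minimal sketch, with paths and the upstream made up:

    server {
        listen 80;
        root /app/public;
        sendfile on;                  # kernel copies file -> socket directly

        location / {
            try_files $uri @app;      # existing static files never hit Ruby
        }
        location @app {
            proxy_pass http://127.0.0.1:9292;
        }
    }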