An advanced browser fingerprint calculator aimed mainly at Tor Browser users (irisa.fr)
158 points by jerheinze on March 21, 2017 | 77 comments


I worked on designing tracking scripts for six months (sorry; fortunately they aren't in production). Fingerprinting was very difficult to pull off in practice: even with canvas fingerprinting, font enumeration, plugin enumeration, etc., most mobile phones are still indistinguishable. Desktops are easier to fingerprint because they often have unique browser plugins or a unique set of fonts installed. Even with desktops, fonts and other settings usually change within a matter of days, so it's difficult to identify a user unless they're browsing from the same IP address you've seen them at before.
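
For a sense of what the canvas part looks like, here's a minimal TypeScript sketch of the usual approach (the drawing calls, fonts, and hashing are arbitrary illustrative choices, not any particular tracker's script):

    // Render text and shapes, then hash the pixels. Differences in GPU,
    // drivers, OS font rasterization, and anti-aliasing make the hash vary
    // across machines even though the draw calls are identical.
    async function canvasFingerprint(): Promise<string> {
      const canvas = document.createElement("canvas");
      canvas.width = 240;
      canvas.height = 60;
      const ctx = canvas.getContext("2d")!;
      ctx.textBaseline = "top";
      ctx.font = "16px Arial";
      ctx.fillStyle = "#f60";
      ctx.fillRect(10, 10, 100, 30);
      ctx.fillStyle = "#069";
      ctx.fillText("fingerprint test \u{1F600}", 2, 2); // emoji exercises color fonts
      const bytes = new TextEncoder().encode(canvas.toDataURL());
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

The point being that on mobile this hash lands in a handful of big buckets (identical hardware and fonts), while desktops scatter much more.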

What used to work really well were Flash cookies. Adobe had a security hole where Flash cookies weren't cleared when you cleared your regular cookies. The only way to clear your Flash cookies was to open the Flash application on your laptop and clear all content, or visit a special webpage Adobe built to help users clear their cookies. So for years marketers could store any cookies they wanted this way. This only ended when Chrome began embedding a version of Flash into the browser, so Flash cookies could be deleted when other cookies were deleted.

The other mechanism that was really interesting was ETag tracking. When you request a picture or other asset from a website, the website can send you an ETag ID which is supposed to signify the asset's version. When the client revisits the page, it sends back the ETag to confirm that the cached version is the same as the version on the server. The security leak is that the protocol allows arbitrary text to be set as an ETag, so to set an "ETag cookie" all you have to do is place a 1x1 pixel on each page and tag it with a random GUID; when the user revisits the page, the browser will resend the tracking ETag in its request for the 1x1 tracking pixel. This works for browsers with cookies disabled, and it survives when cookies are cleared. The only way to clear it is to clear all browsing history entirely, including cached images. Fortunately, Chrome now clears cached images by default when you clear your cookies.
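
A rough Node/TypeScript sketch of the server side of this trick (the route, cache lifetime, and logging are made up for illustration; real deployments vary):

    import { createServer } from "node:http";
    import { randomUUID } from "node:crypto";

    // 1x1 transparent GIF used as the tracking pixel.
    const PIXEL = Buffer.from(
      "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", "base64");

    createServer((req, res) => {
      // A returning browser echoes our ID back in If-None-Match,
      // even with cookies disabled or cleared.
      const seen = req.headers["if-none-match"] as string | undefined;
      const id = seen ?? randomUUID();
      console.log(seen ? `returning visitor ${id}` : `new visitor ${id}`);
      res.writeHead(200, {
        "Content-Type": "image/gif",
        // Spec-wise an ETag should be a quoted string, but arbitrary text works.
        "ETag": id,
        "Cache-Control": "private, max-age=31536000",
      });
      res.end(PIXEL);
    }).listen(8080);

Answering 200 with a fresh copy (rather than the usual 304 Not Modified) re-sets the tag and keeps it pinned in the cache indefinitely.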


There are a whole BUNCH of other ways available. In practice, no single thing is even close to 100% accurate, but once you have enough distinct indicators you can fingerprint to a pretty high level of accuracy (for many advertisers 80% accuracy is enough, and it's probably not worth spending lots of effort to go above 90%). Clock skew, for example, is slightly different on each device, but on a given device doesn't change that quickly. Mouse/finger movement patterns, TLS negotiation timings, etc, etc. There are even technologies able to track users across different devices, but it's been too long since I worked in this area to be able to speak confidently about the methods used to implement that.
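
The combining step itself is trivial; the work is in collecting stable signals. A toy sketch (the signal set here is illustrative, not any real product's feature list):

    // No single field is unique, but the joint value often is.
    interface Signals {
      userAgent: string;
      timezoneOffsetMin: number;
      screen: string;      // e.g. "1920x1080x24"
      fonts: string[];     // whatever enumeration detected
      canvasHash: string;
    }

    function compositeFingerprint(s: Signals): string {
      const joined = [
        s.userAgent,
        String(s.timezoneOffsetMin),
        s.screen,
        s.fonts.slice().sort().join(","), // order-insensitive
        s.canvasHash,
      ].join("|");
      // Non-cryptographic FNV-1a is plenty for bucketing users.
      let h = 0x811c9dc5;
      for (const ch of joined) {
        h ^= ch.codePointAt(0)!;
        h = Math.imul(h, 0x01000193) >>> 0;
      }
      return h.toString(16);
    }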


While you were working on it, how did you feel about the ethics of trying to unmask people who are clearly trying to browse privately?


I thought it was very questionable, but I was in debt and that was the only job I could get at the time. There's actually a positive use for browser fingerprinting in that it could theoretically detect MITM attacks or stolen cookies. Basically, if you place a unique cookie on each device, and then a user/device shows up with that cookie but a totally different browser fingerprint, it's possible that the account was hacked. The client I was working for was a bank, so I was pushing for it to be used in this way. But realistically, I knew it was probably going to be used for marketing or privacy invasion; I just felt like I needed to keep the job to pay the bills.
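
A sketch of that defensive use, assuming a per-device cookie plus a fingerprint computed at login (the store, names, and policy are all hypothetical):

    // If a known device cookie shows up with a fingerprint never seen for
    // that cookie, the cookie may have been stolen: challenge the session.
    const knownDevices = new Map<string, Set<string>>(); // cookie -> fingerprints

    function assessLogin(deviceCookie: string, fingerprint: string): "ok" | "challenge" {
      const seen = knownDevices.get(deviceCookie);
      if (!seen) {
        knownDevices.set(deviceCookie, new Set([fingerprint]));
        return "ok"; // first sighting, nothing to compare against
      }
      if (seen.has(fingerprint)) return "ok";
      seen.add(fingerprint);
      return "challenge"; // e.g. force re-authentication with a warning
    }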


Detecting hijacked sessions with browser fingerprinting (mouse movements, typing speed, etc.) is a very neat idea; it could be used to raise red flags or make users sign in again with the appropriate warnings and education.


I appreciate your honesty, thanks for replying.


Besides stolen sessions, using browser fingerprinting for detecting login attempts with stolen passwords is even more useful.


I understand where you're coming from. In a past life I trained defence people how to build better software for their killing machines. Sometimes you just have to pay the bills.


That argument is terrible. If you are smart enough to program weapons systems for a defense contractor then you are smart enough to make money doing something else.


The issue is convincing other people that someone who does that doesn't need to be an "Angular Ninja" or know by heart the complexity of several sorting algorithms.


This. I understand that sometimes you might need somebody who can hit the ground running with a particular piece of technology but, most of the time, what you need is a competent engineer, regardless of whether they are currently intimately familiar with your technology du jour.


I was fresh out of uni, teaching Ada and C++ best practice, along with toolkit and processes. Literally spent my last penny getting to the job interview, then lived on credit and supermarket end-of-day markdowns for six months.


Only because it has no bounds, like most schools of ethics. But it is financially responsible in this society, and OP doesn't work there anymore.


I live in a country which is fighting off an external aggression for the last several years, and here "making killing machines" for defence contractors is a noble thing because you are literally saving lives with your work. And it's close to volunteering because you won't get paid proper money on such a job (because the money paid to you would likely be donated by other citizens anyway).


I see where you're coming from, but that war looks unwinnable to me for either side. You have reached a stalemate, so further killing kinda seems pointless in light of that. I've seen civil war first hand and how it can be all too easy to rationalize the killing of "them". I'm not passing any judgment, just saying. Anyway, we digress too far.


For us, winning means stopping their advance. This is not a civil war, there is a clear external force to counteract. What you call "stalemate" is actually a big achievement for us.


How far does this logic apply? If it covers building machines used to kill without justification (I'm assuming you wouldn't point this out if you thought the machines killed justified targets), does it apply to someone running a website teaching people how to use malware? Someone using malware? Someone running a website dealing in harmful material that is mostly illegal? Does it matter if we say the person lives in a place where the local authority allows for this activity?


When a benevolent party understands how unmasking works, it provides competition for the malevolent parties, to the benefit of those wanting to browse privately.


I'm not OP but I worked on something similar that also never made it to production.

In our case we were going to use fingerprinting to prevent sweepstakes fraud. People would enter multiple times despite the rules specifying you can only enter once. A lot of people were not sophisticated, so much of it was easy to detect (e.g. same contact info but a different email). Some were less easy to detect, so I figured fingerprinting could help.

I never finished it so it was never tested in a live environment. We wouldn't have shared the info with anyone except in the case of fraud so I don't see it being a problem.
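
For illustration, the "easy to detect" case might look like this in TypeScript (the field names and normalization rules are hypothetical):

    // Group entries by normalized contact info; the same person entering
    // under several emails collapses into one bucket.
    interface Entry { name: string; phone: string; address: string; email: string }

    function duplicateGroups(entries: Entry[]): Entry[][] {
      const buckets = new Map<string, Entry[]>();
      for (const e of entries) {
        const key = [
          e.name.trim().toLowerCase(),
          e.phone.replace(/\D/g, ""),                           // digits only
          e.address.trim().toLowerCase().replace(/\s+/g, " "),
        ].join("|");
        let group = buckets.get(key);
        if (!group) { group = []; buckets.set(key, group); }
        group.push(e);
      }
      return [...buckets.values()].filter((g) => g.length > 1);
    }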


Hello, I feel like taking some downvotes today.

In the same way that a vendor can't force you to honor their intention for you to view, read, and seriously consider the ads sent alongside their content, you can't force them to honor your intention to maintain privacy when you're visiting their servers. If it's fair game for you to mangle their content inside the browser before it gets presented to you, it's fair game for them to ask your browser to send back some bits so they can derive the benefit they want. It's your responsibility to select a browser/extension combo that provides the level of privacy and anonymity you desire.

There's a bit of escalatory reciprocity at work here; users demand more and more for free, and companies are left with fewer and fewer ways to get consumers to consciously pay for their goods and services.

I have never worked in adtech or user tracking, but I understand the mindset of those who do, and I don't feel that it's at all inherently evil, as some people make it out to be.

Part of the reason that sophisticated spam operations like private link rings and social media astroturfing are a required part of today's web marketing toolbox is because users are doing everything they possibly can to avoid seeing advertisements alongside their content. All this accomplishes is that ads migrate from explicitly-labeled boxes on the side of the page to the content sections, as "sponsored stories" on news sites, astroturfed comments on blogs, and stuffed product reviews on ecommerce sites.

I don't think there's anything necessarily wrong with end-users running an ad blocker extension if that's what they want to do, and by no means am I suggesting any restriction on their freedom to do it. I run such an extension myself (though I usually wait until I come across a particularly egregious site to install it).

But by the same token, I don't think there's anything wrong with a company attempting to extract what data they can from a user's interaction with them, as long as it's voluntarily given by the user or the user's browser. There is no reason that they shouldn't be free to attempt to fingerprint individual users who are visiting their site, especially since in most cases, the goal is to enhance user experience by providing ads they might actually find relevant instead of asinine.

Just my two cents. I know it doesn't fit everyone's favorite narrative of "businesses trying to make money are evil", but I think this group of frequently-maligned developers deserves someone to stand up for them once in a while.


I might agree with you about showing ads, but regarding tracking and collecting data there's a substantial difference. By collecting and storing users' data, you put people at real risk, even if you collect it with consent. The risk is the data being stolen or sold and then used for malicious purposes.

For example, if you're a delivery service and you store delivery history, you put your customers at risk of being burgled.

So the ethical attitude would be to avoid collecting and storing anything you can avoid.


>I might agree with you about showing ads, but regarding tracking and collecting data there's a substantial difference. By collecting and storing users' data, you put people at real risk, even if you collect it with consent. The risk is the data being stolen or sold and then used for malicious purposes.

I don't think that this type of fingerprinting is necessarily any more insidious/risky than any other ordinary activity one can undertake online. That's not to say that data is not valuable to someone with malice, but I don't think it's so especially dangerous that it's inherently inappropriate to collect it. Ultimately, it's the user's responsibility to find a browser that has strong identity protection features and use them properly if it's a major concern for them.

Things like storing a Flash cookie obviously contradict a user's intention, but I don't see that as automatically unethical just because the user would prefer if it didn't happen. At least not any more unethical than contradicting the publisher's intention of displaying ads.


You are completely conflating two orthogonal topics: not wanting to view ads vs. wanting privacy.

User A doesn't mind seeing ads, but user A doesn't want to be tracked by corporations that sell personal information so she clears cookies on each browser exit.

User B hates ads for one reason or another (malware, performance, distraction, etc.), so he uses an ad blocker. He doesn't care about fingerprinting across sessions, so he doesn't bother clearing cookies.

Do you see how you are throwing user A under the bus because you have a problem with user B?

There is no excuse for trying to correlate users who are trying to maintain privacy as a response to an entirely different group of users running uBlock.


I don't have a problem with either of your hypothetical users. I simply believe that it's the user's responsibility to ensure that their browser is not disclosing more information than they want.

Servers are not committing any ethical violation by attempting to track unique user identities through non-disruptive utilization of normal browser features like cookies, plugins, and ETags.

The unifying thing is that companies want to display ads, but users don't respect that want and employ technological solutions to block ads. That's all well and good.

But by the same token, when a user wants to browse anonymously, there is no reason that servers shouldn't utilize the resources that are regularly available in a typical client-server exchange to attempt to identify the user.

The only contrary argument is that companies should be forced to respect the user's wishes, just because the user wishes for it. That's not how the world works. If the user is concerned that their browser may be leaking information, they should take responsibility and ensure it isn't. It's not on the company's head to ensure that the user's wish for privacy is respected any more than it's on the user's head to ensure that the company's wish to display ads is respected.


This logic is all based on not having any ethical concerns about doing something the user didn't intend to participate in. If you don't care about ethics, then you could just extend your argument to say that exploiting vulnerabilities in the user's browser to install tracking code directly into the binary is acceptable as well.

Are you okay with that logical conclusion?

>The only contrary argument is that companies should be forced to respect the user's wishes, just because the user wishes for it. That's not how the world works.

It actually is how much of the world works. What ends up happening is people with no regard for ethics keep abusing this until regulations get passed to force people to stop doing things they know people don't want.


>you could just extend your argument to say that exploiting vulnerabilities in the user's browser to install tracking code directly into the binary is acceptable as well. Are you okay with that logical conclusion?

No. That's not the logical conclusion; it's the extremist conclusion.

The difference is that with things like cookies and ETags, the browser is designed to return that information back to the server.

The server is not engaging in any malicious injection of foreign binaries or hijacking of program behavior. It's not doing anything to break outside of its allowed permission sandbox. It's using intentional, supported functionality in a non-disruptive manner. The only thing is that some people don't like that some of that functionality allows them to ascertain a user's identity after the user has cleared his/her cookies (ignoring here that the much larger privacy concern is the nearly-constant IP address of the user's home internet connection).

If that's the complaint, and in this case, it is, then there is no fault from the server. The user should ensure that his/her browser is making those privacy controls available. Ignoring the problem and shaming companies for making the connection misplaces blame and allows the problem to remain, exploited only by those whose intent may actually be malicious.

Obviously, there are many malware vendors who use their software to track users. I'm not defending them, nor am I defending those who exploit security bugs to get at data that they're not supposed to be able to access. That's not the case now; all of the fingerprinting methods discussed up to this point are fairly simple and do not involve coaxing software into any bad or illegal behavior, nor do they involve executing any spyware on the client machine.

>It actually is how much of the world works. What ends up happening is people with no regard for ethics keep abusing this until regulations get passed to force people to stop doing things they know people don't want.

No, it's not as simple as something "people don't want". Ethics are not about wants and conveniences, they're about behaving in a fair and honorable way. That's not the same as, and in fact it's sometimes the opposite of, what a customer may demand. Companies have obligations to non-customers as well.

Today, customers are demanding that web publishers make all of their content available at no cost and with no ads. If we take the attitude that "if you don't want users to strip out your ads with client-side software, don't publish it online", it's just as fair to say "if you don't want companies to correlate your visits with server-side software, don't leak information that allows them to do so".

If there's beef to be had with anyone here, it's the browser manufacturers who can't seem to figure out a way to engage in a normal conversation with a peer without leaking data that can be used to uniquely identify the user on the server-side.


>Ethics are not about wants and conveniences, they're about behaving in a fair and honorable way.

It's not ethical to correlate a user's sessions using etags when they have cleared cookies and caches (or set DNT header). You are expressly behaving against the user's wishes, and there really is no difference in exploiting the browser in other ways to reveal other personal information.

>Today, customers are demanding that web publishers make all of their content available at no cost and with no ads.

Did you even read my original comment? Privacy sensitive people and people that don't like ads are not the same set of people. Stop using ad blockers as a justification for unethical privacy violations.


>It's not ethical to correlate a user's sessions using etags when they have cleared cookies and caches (or set DNT header). You are expressly behaving against the user's wishes, and there really is no difference in exploiting the browser in other ways to reveal other personal information.

You don't see a difference between analyzing the data a browser gives back during normal operation and exploiting security bugs to force browsers to give up information that is supposed to be private? They're tracking your session based on the data that your browser normally sends. They're not crossing any boundaries or stealing any secrets.

The difference between using ETags and exploiting a bug to get secret information is that ETags were never meant to be secret or private. No one thinks they're "personal information". They're normal information that happens to be useful for uniquely identifying a user, as are Flash cookies.

The customer can ask politely not to be identified, but there is no reason that companies have to automatically consider that an ethical obligation just because customers wish it was that way. Customers also wish everything was free.

If you feel browsers should support completely untraceable sessions, that's a feature request. It's not an ethical violation on the part of others that they're not imagining this feature already exists.

>Did you even read my original comment? Privacy sensitive people and people that don't like ads are not the same set of people. Stop using ad blockers as a justification for unethical privacy violations.

We're not talking about specific people, we're talking about general principles. If ad blockers are not unethical in principle, then neither is server-side session tracking, because both stay 100% within the technical bounds without coercing any information from or disrupting the ordinary operation of the other party.

It seems that your conception of ethics really does boil down to "the customer wants it", so it's going to be hard to continue this conversation.


I came across a project that combines all possible ways to store cookies (can't find it now). If the user deleted one type of cookie, it was recovered from the other types. The same information was encoded into 10 to 15 types of cookies (Flash cookie, image cache, ETags, browser storage, ...). I managed to defeat this supercookie, but it was not easy.
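
The redundancy trick is easy to sketch with just two stores (the project in question reportedly used 10-15):

    // Write the same ID everywhere; on load, resurrect it from whichever
    // copy survived clearing, then re-write all copies.
    function readCookie(name: string): string | null {
      const m = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
      return m ? decodeURIComponent(m[1]) : null;
    }

    function resurrectId(): string {
      const id =
        readCookie("uid") ??
        localStorage.getItem("uid") ??
        crypto.randomUUID(); // nothing survived: mint a new one
      document.cookie = `uid=${encodeURIComponent(id)}; max-age=31536000; path=/`;
      localStorage.setItem("uid", id);
      return id;
    }

Clearing just one store does nothing; you have to wipe every store in the same session to actually lose the ID.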


That sounds like evercookie[1], which has been around for a while.

1: http://samy.pl/evercookie/


I took a look at that project too. In modern Chrome at least, clearing your browsing history, cookies, and cache clears all those cookies.


> What used to work really well were Flash cookies.

I heard from an adtech company that a patent troll was granted a patent on this (incredible: using an Adobe feature as designed wins a patent), and would wait until adtech companies reached a sufficient size before suing. They had a well-known reputation for settling for a few million and weren't contested.


I have a few questions about fingerprinting.

Why does Tor send the window size to servers? If it has to, why can't it send the closest multiple of 100 instead of the real value?

Why do servers request the size of client windows anyway? I assume it's so they can serve different-resolution images or different layouts to clients; is this correct? But then, instead of sending 1920x1080, simply sending 1900x1100 would also work, right?

The same goes for fonts: as soon as you install a few extra fonts, you are pretty much unique. Why does a browser have to expose the fonts you have? Shouldn't it be possible to expose only a subset, with the OS's default fonts enabled by default and newly installed fonts disabled?


> Why does Tor send the window size to servers? If it has to, why can't it send the closest multiple of 100 instead of the real value?

It's not Tor that does so; by using JS, one can determine the canvas size. Worse, CSS alone can be used to extract it.[1] You can take a look here for a live demo.[2]

[1] : https://matt.traudt.xyz/p/YF4ciVY6.html

[2] : https://system33.pw/cssmediapoc
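
The JS path is as plain as it gets; a short sketch (the /collect endpoint is made up):

    // The viewport and screen geometry are readable by any script,
    // and trivially shipped home.
    const dims = {
      inner: `${window.innerWidth}x${window.innerHeight}`,
      screen: `${screen.width}x${screen.height}`,
      dpr: window.devicePixelRatio,
    };
    navigator.sendBeacon("/collect", JSON.stringify(dims));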


This is all done client-side with Flash/JavaScript. For Tor it is recommended to disable JS, and I believe the Tor Browser Bundle comes with NoScript by default. Plugins should be disabled because they are easily exploited to de-Tor someone.


Tor comes with NoScript but JS is still enabled. The very first thing you should do after installing Tor is disable JS.


The browser needn't directly send info to the server; it simply has to be available to client-side Javascript. That can then, in turn, send that info to the server, which (I think) is really hard for browsers to detect.



Actually, common browser window sizes are divisible by 128, which means you could just mask off the low-order bits to achieve the same result in a few cycles.
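
The masking itself is a couple of instructions, e.g. in TypeScript:

    // Zero the low bits so many distinct real sizes collapse into one bucket.
    function bucketed(w: number, h: number): [number, number] {
      return [w & ~127, h & ~127]; // round down to a multiple of 128
    }
    // bucketed(1920, 1080) -> [1920, 1024]
    // bucketed(1855, 1056) -> [1792, 1024]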


No, you actually have to display the content inside a pane of a common size because there are many ways to get screen size indirectly from content that's occupying it. Think responsive mode of developer tools.


Under the "Tor" tab it states that the browser should be set to a window size of 1000x1000 or a multiple of 200x100. Is this to stay consistent across all Tor users? I would have thought that 1920x1080 would be fine to help stay anonymous.


Assuming the browser window is maximized, even if most screens are 1920x1080 the browser viewport area will differ based on OS, window manager, and custom settings.


Yes, you want Tor browser window sizes in as few bins as possible. On Linux, it only accepts those sizes.


At least on OS X if you try to maximize it will give you a stern warning.


Surely there's some way to detect when a script touches far too many APIs such as setting several font families in succession. Then pause execution, warn the user about potential fingerprinting, and either disable script or blackhole its network requests.
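
A sketch of that detection idea, wrapping one fingerprint-prone API to count how often a page touches it (the threshold and the response are made up; a real implementation would live in the browser or an extension):

    let fontSets = 0;
    const proto = CanvasRenderingContext2D.prototype;
    const desc = Object.getOwnPropertyDescriptor(proto, "font")!;
    Object.defineProperty(proto, "font", {
      get: desc.get,
      set(value: string) {
        if (++fontSets > 50) {
          // Setting dozens of font families in a row is a classic
          // font-enumeration pattern; warn, pause, or blackhole here.
          console.warn("possible font-enumeration fingerprinting");
        }
        desc.set!.call(this, value);
      },
    });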


TBB already does that, for example with the HTML5 canvas element; it shows you this box: https://pbs.twimg.com/media/C1_n50BW8AACLKq.jpg


Or, the browser could change the output of some APIs in subtle ways.


I've used several browser fingerprinting services and have tried a few of the techniques myself; they're incredibly useful for fraud prevention when the fingerprint is reported to a central database alongside the fraud that happened to _you_. The next time that fingerprint shows up at an eCommerce site, they'll be blocked off from purchasing or at least flagged for additional verification.

They're also just useful for super targeted ads.

shrug


They could also be useful, in theory, for detecting session hijacking through cookie sniffing. Did somebody's device fingerprint change mid-session? It's possible that person's account has been hacked, so they should at least be made to log in again.


Why don't we just create a VM with the Tor browser preinstalled? Surely it would make fingerprinting a lot harder. ETags would still make you vulnerable, but caching can also be disabled. Then you're left with cookies.


TAILS exists, but that doesn't necessarily prevent fingerprinting.


A good fingerprinting method (it only applies to devices on private networks) is using JavaScript to enumerate the devices/services on a user's network (those running HTTP(S) or other services, if they are in the 'safe' port range).

You can also test for models/versions of a router on their network (for example, many routers allow access to static content such as images without authentication), so if a unique/uncommon image, CSS, or JavaScript URL can be accessed without authentication, then the user can be fingerprinted not just across browsers but across devices as well (even in a VM). This is done using network timing (to test whether TCP servers exist) and the onload/onerror XHR events, which work even for 3rd-party origins by creating img or iframe elements.
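
A hedged sketch of the probing primitive, using an Image element rather than XHR (the address and path are examples only, and the timing heuristic is rough):

    // Try to load a resource from a private address and watch which event
    // fires, and how fast. A quick onerror usually means an active refusal
    // (a host exists); hitting the timeout usually means nothing is there.
    function probe(url: string, timeoutMs = 2000): Promise<"alive" | "silent"> {
      return new Promise((resolve) => {
        const start = performance.now();
        const img = new Image();
        img.onload = () => resolve("alive"); // something served an image
        img.onerror = () =>
          resolve(performance.now() - start < 500 ? "alive" : "silent");
        setTimeout(() => resolve("silent"), timeoutMs);
        img.src = url;
      });
    }

    probe("http://192.168.1.1/logo.png").then((r) => console.log("router:", r));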


I'm confused by the "aimed mainly at Tor" part. A lot of these techniques use JavaScript, and the Tor browser blocks JavaScript in general and warns strongly against enabling it. So it seems that part of the technique will be ineffective against most Tor users.


What? JavaScript is enabled by default on the Tor Browser Bundle, and users are not actively encouraged to turn it off.


It's complicated. Javascript is enabled, but NoScript is installed. But NoScript by default allows all "honest" scripts, although it does block some stuff. But that's in the default "low security" mode. The goal is having websites work, so users won't give up on Tor, while providing some security. Users can increase security, but there are only three options (low, medium and high). That's to keep user profiles in fewer bins. In the "high security" mode, all scripts are blocked.


I used NoScript roughly ten years ago and it was a lot of work to manually enable JavaScript all over the place. I used it for a couple of years solid, but eventually gave up on it, since I was practically just enabling JavaScript on every site anyway. Everyone I tried to introduce it to gave up on it after a few weeks at most.

To NoScript's credit, it did block a phishing site that I otherwise would've fallen for once.

That was before the era of single-page apps. Like it or not, JavaScript is mandatory for even basic functionality on the modern web.

Telling users to introduce 4 clicks to load every new domain and potentially experience some significant breakages (e.g., I remember some checkout processes failing because they'd bounce the request around between scripts from processors, fraud prevention, etc.) just on the remote chance that they'll encounter malicious JavaScript is simply a non-starter. Something like Ghostery that mostly-transparently blocks things is a better proposition for ordinary adoption.


An absolutely insane default IMO, especially when those who might need to use Tor the most are the technically illiterate.


> An absolutely insane default IMO

Why do you consider it to be so? To quote Tor Project's Mike Perry:

The reason we feel that leaving Javascript enabled trumps these concerns is:

1. We want enough people to actually use Tor Browser such that it becomes less interesting that you're a Tor user. We have plenty of academic research and mathematical proofs that tell us quite clearly that the more people use Tor, the better the privacy, anonymity, and traffic analysis resistance properties will become.

In fact, my personal goal is to grab the entire "Do Not Track" userbase from Mozilla. That userbase is probably well in excess of 12.5 million people: http://www.techworld.com.au/article/400248/

I do not believe we can capture that userbase if we ship a JS-disabled-by-default browser.

2. Exploitable vulnerabilities can be anywhere in the browser, not just in the JS interpreter. We disable and/or click-to-play the known major vectors, but the best solutions here are providing bug bounties (Mozilla does this; we should too, if we had any money) and sandboxing systems (Seatbelt, AppArmor, SELinux).

https://lists.torproject.org/pipermail/tor-talk/2012-May/024...


I won't address all your points because I'm phone posting, sorry. It seems to me that there are two ideal consumers of the Tor product: the people who need to use it because they are at risk of being identified in meatspace (needsecurity), and the people who are needed to mask the first group (providesecurity).

The needsecurity group should not have JS enabled because it has been shown to be insecure (correct me if I'm wrong).

I would be happy if there was a big, red, fullscreen, flashing dialog when the Tor browser started, asking the user which group they were in.

I would/will be happier if/when major browsers transition to 'Tor as normal' status and everyone is in the providesecurity group.


Would you consider putting up an email in your profile (even anonymized), or emailing me (my email is in my profile). I am interested in talking with you about Tor security research, and it seems you are an affiliated dev.


Sent you an email, still waiting for a response :P


NoScript itself promotes malware, has a very shady history, and should be avoided.


Tails is very security-conscious; if NoScript were shady, they wouldn't have included it. So if you think they purposely included something that promotes malware, you are saying people should stay away from Tails.

I think you are trolling. In fact, I wouldn't be surprised if you worked for some organization that wants to keep people off Tor.

If you want to defend your post, then explain what people should do instead of using Tails.


If you think NoScript is not shady, you must be unaware: https://news.ycombinator.com/item?id=12624000

I don't want to keep people away from Tor; I personally don't care about the Tor project as such, but I believe everyone deserves privacy. And people or companies who believe it's okay to mess with your adblock filters and promote shady malware companies don't work for the same purpose.


The update page for NoScript never appears in the Tor Browser, and it seems NoScript's maintainer deleted those ads after that criticism.


They only appear when you are using an IE + Windows user-agent string.

It doesn't directly affect Tor, but the combination above belongs to the most vulnerable users, and if the author is fine with giving them instructions for installing malware, why should a Tor user trust him? It doesn't make sense.


I see your point about Tor, but what about Tails?


Because NoScript itself is fine.


And mass surveillance is only to catch the bad guys; why bother with Tor/Tails, it's only for terrorists.


They were at some point.


I've noticed that Tails and other privacy-focused tools go to great lengths to make users look the same as other users. And as far as my understanding goes, this is somewhat tricky with things like canvas fingerprinting.

Is there a reason why they want to look the same? Could the same result be achieved by looking unique every time? E.g., instead of attempting to make every canvas fingerprint the same, make every fingerprint unique by introducing noise.
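
A sketch of the noise approach (whether it helps is exactly the open question; note that per-read noise is itself detectable by reading the canvas twice and diffing, which is why some tools use stable per-session noise instead):

    // Perturb the low bits of canvas pixel reads so every fingerprinting
    // attempt sees a slightly different image.
    const realGetImageData = CanvasRenderingContext2D.prototype.getImageData;
    CanvasRenderingContext2D.prototype.getImageData = function (
      sx: number, sy: number, sw: number, sh: number
    ): ImageData {
      const image = realGetImageData.call(this, sx, sy, sw, sh);
      for (let i = 0; i < image.data.length; i += 4) {
        image.data[i] ^= Math.round(Math.random()); // invisible low-bit noise
      }
      return image;
    };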


There's a layout bug where the nav bar covers content from >768px of width up to the point where the logo and nav links fit on the same row again.


I sympathise with that, but HN is a US-centric liberal echo chamber when it comes to political issues, so everything that has to do with government, intelligence, and surveillance is automatically labelled 'evil'.


It's common for ideologically committed users to see HN as being aligned against them. But this perception is in the eye of the beholder, i.e. it's a cognitive bias. Plenty of comments make opposite claims about HN; the difference isn't in HN but in what the commenter identifies with.

Edit: I've written about this in plenty of places: https://hn.algolia.com/?sort=byDate&prefix&page=0&dateRange=.... Not all of those posts are about perceptions of political bias; some are about perceptions of astroturfing. But the two phenomena are variations of the same thing.

We detached this comment from https://news.ycombinator.com/item?id=13930232 and marked it off-topic.


Why did you mark my comment off-topic? Wasn't it directly related to topic discussed in the parent thread?


Your comment was about Hacker News, not the topic. Also, it was unsubstantive, and those are always off-topic.


No, that's not true. HN has lots of posts that are highly critical of the US government and also US corporations.

By the way, if you don't like liberalism (democracy plus free market economics) then what are you for?



