Hacker News — em3rgent0rdr's comments

The US still taxes you. There's also an expatriation tax.

The government should just write its tax law as a spreadsheet.

> multi-core programming in 2006 was absolutely anemic

OpenMP was around back then and was easy.


tdeck is making fun of the way the article is written.

More precisely "pulse width" would be a time, while "duty cycle" would be a percent.

And while going from 0% to 50% duty cycle it could be said that "a square wave with a low pulse width will sound thinner than one with a high pulse width", once you go past 50% duty cycle the situation reverses. So a 25% duty cycle would sound almost identical to a 75% duty cycle...the amplitudes of their Fourier transform components would be identical.
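A quick sanity check of that claim (a toy pure-Python DFT over small sampled waveforms, not audio-grade analysis): the magnitude spectra of 25% and 75% duty square waves match bin for bin, apart from the DC term.

```python
import cmath

def dft_mags(signal):
    # Magnitudes of the first n/2 DFT bins of a real signal.
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

n = 64
duty25 = [1.0 if t < n // 4 else 0.0 for t in range(n)]      # 25% duty
duty75 = [1.0 if t < 3 * n // 4 else 0.0 for t in range(n)]  # 75% duty

m25 = dft_mags(duty25)
m75 = dft_mags(duty75)

# AC bins (k >= 1) agree to float precision; only the DC bin differs.
assert all(abs(a - b) < 1e-9 for a, b in zip(m25[1:], m75[1:]))
print(m25[0], m75[0])  # DC terms differ: 16.0 vs 48.0 ones in the period
```

The phases of the bins differ between the two waves, which is exactly why they can interact differently in a mix even though each sounds the same alone.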


> almost identical ... components would be identical

I'm having a tough time reconciling how the former could be almost identical while the latter is identical. I guess the former involves a human listening through a speaker which has asymmetric imperfections (maybe the speaker moves outward more easily than it moves inward, or a DC offset in the signal leads to compression in the high-excursion side that doesn't exist on the low-excursion side, etc.) whereas the FFT readout doesn't necessarily have a speaker in the system at all.


25% and 75% would sound identical alone, but in a mix there often are interplays where it can create a difference. An easy way to hear it is to run two synced oscillators, say a square and a saw, with sharp attack. The resulting sound should be sufficiently different, one side would dampen the attack compared to the other. Furthermore, I think in hardware synths and those that emulate them changing pulse width can cause the module to implicitly shift the signal up or down to ensure consistent average voltage, further complicating things. I am curious what you mean by compression.

Good point. If I have 2 oscillators, and no control over their phase as they mix, then an option to choose 25% vs 75% for one of them would at least offer some variation instead of none.

As for compression, this [0] is a good intro. Most commonly it is applied to a signal deliberately to achieve a desired outcome, but I'm referring to a (generally) undesired speaker nonlinearity [1] near its maximum power handling capacity.

[0] https://en.wikipedia.org/wiki/Dynamic_range_compression

[1] https://marshallforum.com/threads/what-exactly-does-speaker-...


Thanks! I’m very familiar with the first one, but never thought of the second one actually.

Different linearity properties on the positive and negative side would be pretty bad for a speaker, but possible. In the case of a square wave, non-linearity would be identical to a fixed amplitude change though, possibly with a DC bias.

Based on the Game Boy wiki I looked up, the phases of the 25% duty and 75% duty waves are such that they are inverses of each other, seemingly eliminating the possibility of combining the two for different waveforms.


I'm always a bit saddened to see that a separate chip is the go-to method to interface with USB. Unfortunately, USB is such an incredibly complex protocol that anything beyond a basic V-USB running USB 1.1 at low speed is generally not doable without specialized hardware and a significant software stack. Meanwhile a protocol like SPI is ridiculously simple...the minimum hardware needed is a shift register that can be clocked fast enough. I miss how desktops and laptops used to have an exposed serial and parallel port, which could communicate at this low level. I often wonder what it would have been like if, instead of USB, we had stuck with UART, I2C, or SPI multidrop (using a small set of standard clock rates) for simple peripherals (maybe over a single connector like the 4-pin JST SH cable used by Stemma QT, Qwiic, and Grove) over a short distance, and then jumped to IEEE 802.3 Ethernet links for data-heavy peripherals like monitors and external drives. Then, instead of having separate support for USB and Ethernet, you would just support Ethernet links.


> Meanwhile a protocol like SPI is ridiculously simple

Yes, it is. It was intended to require as little silicon as possible to minimize the cost to the transistor budget. SPI doesn't contemplate power supply, hot-plug, discovery, bit errors, or any other of a host of affordances you get with USB.

I think there is some value for software developers in understanding SPI and the idioms used by hardware designers with SPI. Typically, SPI is used to fill registers of peripherals: the communication is not the sort of high level, asynchronous stuff you typically see with USB or Ethernet and all the layers of abstraction built upon them. Although there is no universal standard for SPI frames, they do follow idiomatic patterns, and this has proven sufficient for a vast number of applications.


I feel like you could support hot-plug, discovery, and bit errors with a protocol orders of magnitude simpler than USB, something you could bitbang on an ATTiny45. (And without Neywiny's list: 'bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words.' Those incompatibilities are mostly just people taking shortcuts.) And, unless you're talking about USB-C PD, which isn't involved here, the power-supply question isn't really related to the software complexity; it's just a question of making the power and ground traces on the USB-A plug (and the corresponding parts of other connectors) longer than the data traces so they make contact first.

You couldn't make it quite as simple as (a given flavor of) SPI, but something close to I²C should be feasible.


> something close to I²C should be feasible.

That'd be https://en.wikipedia.org/wiki/System_Management_Bus


That's just I²C with some features disabled. It doesn't add any of the things I²C is lacking.

Well, okay, I guess SMBus ARP kind of does. Thanks!


I never thought about making hot pluggable SMBus peripherals. It's an interesting idea. Many motherboards even have headers broken out, and of course operating systems already have drivers for many SMBus peripheral types. LoFi USB.


We have hot pluggable I2C at home. Every HDMI port (and pretty much every DP port with a passive adapter) has I2C on the DDC pins. The port also provides 5V 50mA so your MCU doesn't need external power. Example: https://www.reddit.com/r/raspberry_pi/comments/ws1ale/i2c_on...


Does HDMI use SMBus ARP?


Nope. Software on the PC side reads EDID data from a fixed address. The EDID can list other addresses.

Perhaps there is a place for a simpler alternative. My comment was pretty tangential to this discussion about the merits of SPI vs USB vs whatever. My point is that I believe some benefit can be had by software developers in understanding how components can be integrated together using a primitive as simple minded as SPI. I used the qualification "some" again, as well. I don't offer any revolutionary insights, but if you survey how SPI is used in practice, you'll learn some things of value, even if you never use SPI yourself.


I think there's something like a dual of the uncanny valley when it comes to protocol complexity vs adoption. Really simple like UART, I2C, or SPI, and engineers will adopt it on their own. But once you start wanting to add some higher level features, engineers would just as soon reinvent their own proprietary schemes for their own specific needs (the "valley"), and the network effects go away. So to create a more popular protocol you end up with a design committee where everyone piles in their own bespoke feature requests and the thing ends up being an ugly complex standard that nobody really likes. But at least it gives it a shot at wider adoption to prime the pump of network effects. (Or maybe it's more akin to how Java beats the Lisp Curse by using the social effect of having to raise an army to do anything?)


The main reason it's near impossible to bit-bang USB is that all devices are required to use one of a few fixed clock rates (1.5 Mbps, 12 Mbps, and 480 Mbps), unlike SPI and I²C, which allow variable/dynamic clock rates.

If you simply remove this restriction, bit-banging USB would become trivial, even with all the other protocol complexity.

Though, I think USB made the right call here. The requirement to support any clock speed the device requested would add a lot of complexity to both hosts and hubs.

Only supporting a few fixed clock rates makes certification and inter-device compatibility so much easier, which is very important for an external protocol. Supporting bit-banging just isn't that important a feature, especially when the fixed clock rates really aren't that hard to implement in dedicated silicon.


USB generally needs a crystal or at least a ceramic resonator to meet its timing precision specs, though apparently Dmitry is getting by without one here. This commonly adds extra cost to USB implementations, because dedicated silicon isn't sufficient; you also need some precisely tuned silicon dioxide or similar.

In V-USB, usbdrv/*.[ch] contains 1440 unique lines. The bitbanging stuff is mostly in the *.S and *.inc files, so correct me if I'm wrong, but I think this is roughly the non-timing-related complexity imposed by the USB protocol stack. (This division is not perfect, because there are things in e.g. usbdrvasm.S which have nothing to do with bitbang timing, but I feel like it's a reasonable approximation.) The remaining complexity in, say, examples/hid-mouse/firmware/main.c is only a few dozen lines of code.

And that's a USB device. Implementing a USB host is at least another order of magnitude more complexity.

You definitely don't need 1000+ lines of code to implement, say, the PS/2 mouse protocol. From either side.

So, while I agree that a lot of the difficulty of bitbanging USB results from its tight timing constraints, I don't agree that what's left over is "trivial".


USB served its intended purpose extremely well, being maliciously complex is a feature. The goal first and foremost was to put the PC at the center of the future digital world, to avoid a timeline where FireWire or something like FireWire linked our devices together w/o a PC as an intermediary.

I would love a protocol you outline, but could you use SPI as the physical layer and put the rest on top?


I think, with appropriate adjustment of pin lengths (so that GND is always first to connect and last to disconnect), Apple Desktop Bus is hot-pluggable. There are even adapters for this purpose, made by hobbyists. May want to install a wee bit of software on the host, to occasionally force a rescan of the bus, but not strictly necessary.

Isn't that basically what USB is? At least if you stick to USB 1. Obviously, since that time, it's expanded to cover a wider range of capabilities. It's a half-duplex serial line, just like I2C. Unlike I2C, it's asynchronous, like a UART.


No, even USB 1 is a ridiculously complex networking protocol stack running on top of that half-duplex serial line.


And for a computer like the one here, even having hot-plug is an unnecessary luxury.


Very true. And dealing with bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words, the list goes on. But when you see "USB 1.1 device" you know a large majority of what it can support and what it'll do.


These (and hotswapping/handshaking) are small quibbles that could be addressed with a simple standard subset of SPI for human-pluggable devices.


While true it does seem like the flexibility allows for good optimizations. For example, a lot of devices don't need addresses or have memory maps etc. So, sadly while it's a pain to deal with, it does make things very fast and efficient.


Since when does USB have error correction/detection?


USB bulk packets have a 16 bit CRC.


Interesting. I recently tested a 6ft USB3 cable and an attached drive. The transfer of a 1TB file failed a few times (not sure of the details). This is strange, since the cable couldn't have been that bad (?) and the 16-bit CRC should have caught those errors (assuming an error will trigger a resend of data). Any ideas what the issue could have been? Does Linux provide a way to view the error rate?


> the 16-bit CRC should have caught those errors

I believe your confidence in the 16-bit CRC is excessive. There is a 1 in 65536 chance of a 16 bit CRC failing for certain types of corruption in 512 byte bulk USB packets, and there are about 2 billion packets in a 1TB transfer. If the BER is high, corruption of the transfer is not surprising.

A 6ft cable should be fine, assuming it is well designed, manufactured correctly, in good condition, and not in close proximity to high noise sources, such as SMPS. If any of those factors are compromised the BER will increase, and you will then be testing the rather limited capabilities of 16 bit CRC.

USB4 has a 32-bit CRC for data payloads for a reason. In the meantime, the #1 thing you can do is use short, high-quality cables.
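To put rough numbers on the above (the 2^-16 escape probability is the usual simplification for random multi-bit corruption, not an exact property of the USB CRC polynomial):

```python
# ~2 billion 512-byte bulk packets in a 1 TB transfer, each corrupted
# packet slipping past a 16-bit CRC with probability roughly 2^-16.
packets = 10**12 // 512        # about 1.95 billion packets
escape_prob = 2**-16           # assumed CRC-16 miss rate for corruption

for corrupted_fraction in (1e-9, 1e-6, 1e-3):
    undetected = packets * corrupted_fraction * escape_prob
    print(f"{corrupted_fraction:g} of packets corrupted -> "
          f"~{undetected:.5f} expected undetected bad packets")
```

So even at a modest corruption rate of one packet in a thousand, you'd expect ~30 silently corrupted packets per terabyte, which is consistent with occasional failed transfers on a marginal cable.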


Yes, perhaps you are right, that's why I was asking about a way to see the error rate. I think it is also a failure of the USB standard then, to allow a transfer to happen if the BER (according to the CRC) is so high.

Glad to see that USB4 is fixing things but one thing I worry about is if this 32 bit CRC is a mandatory part of the standard that cannot be turned off or ignored by manufacturers. Especially since it apparently is not transparent to users.


https://wiki.wireshark.org/CaptureSetup/USB says "Software USB capture captures URBs (USB Request Blocks) rather than raw USB packets." So Wireshark couldn't give you the CRC, which would be stripped away.

You could hypothetically use a really high-bandwidth oscilloscope (like 2 GHz, to view 480 Mbps USB HS signals), but those are expensive. So you would have to resort to using an external USB sniffer...out of curiosity I found someone made a sniffer that is basically a USB-capable microcontroller plus an FPGA and a USB PHY: https://hackaday.com/2023/06/13/cheap-usb-sniffer-has-wiresh...


So you want to replace a USB PHY with a serial to Ethernet converter and an Ethernet PHY.

The reality is that the simple protocols like SPI and I2C just are not good enough. They aren't fast, the single-ended signal scheme makes them very sensitive to noise, and there is no error correction. These protocols make sense and work extremely well for their intended purpose: connecting ICs on a PCB. If you expose an unterminated port to the outside world, all bets are off.

These protocols and variations thereof are still in heavy use in modern PCs. But they're internal busses, as the protocols intend.

I haven't looked closely at the USB spec, but I imagine the main problem with bit-banging is simply the speed required. You have to have dedicated hardware because no microcontroller is fast enough to toggle the pins while also running the software stack to decode the protocol and manage error correction.

You can run into this exact problem bit-banging I2C. With a 20 MHz CPU, the maximum clock speed you can get is about 250 kHz. Just a bit more than half the typical maximum rate of 400 kHz. You can absolutely forget about the 1 MHz version.

PHYs exist for one very good reason: it is vastly cheaper to offload comms protocols to hardware. Without that, you have to over-spec your CPU by quite a lot to get enough resources to manually manage communication. This is why every modern microcontroller contains hardware for I2C, SPI, serial, etc.

In summary, the simple serial protocols like SPI and I2C and UART are just absolutely terrible choices for external peripherals. They can't operate at reasonable speeds, they can't tolerate long cables, they can't tolerate noise. The nature and design of these protocols (excepting RS232 which is not UART) means that they cannot be used this way. There's no change to the spec you could make to support this without reinventing USB.


UART over LVDS is still quite simple and works well for long cables and it tolerates ground differences and noise well.


Yeah, and LVDS is something that really cheap ice40 FPGAs support with not much of an area cost.

(In my original comment I should have said to use differential signaling for going off-board.)


USB is also tough to bitbang because it has pretty strict timing requirements. Compare that to something like I2C, where the clock only advances when the pin is explicitly toggled.


You may have intended to say SPI. I²C does support "clock stretching" to delay until ready, but that's only in one particular case; otherwise the I²C clock advances all the time at whatever your baud rate is, not only when a pin is explicitly toggled.


That depends on if you are the controller or the target, no? My usual use case for i2c is for talking to some peripheral from a microcontroller, where I am acting as the clock source. Clock stretching applies to the target side, at least when you are talking about SCL.


Hmm, on thinking about it further, I guess I was pretty comprehensively wrong. Thank you.


I work in the AV industry. RS-232 is still the king for control signals between devices, even on brand new hardware that costs >10K USD. TV screens for signage/conference rooms often have RS-232 for more versatile control than HDMI-CEC. A higher bitrate than 9600 bps is often not needed. The most common connector consists of three-pin screw terminals (Tx, Rx, GND), although these days most installations have at least one RS232-to-USB adaptor somewhere. And for larger rooms, RS232 is bridged over Ethernet.

This was a bit of a surprise when I started, but then I realised that many installations are decades old, with components having been replaced individually.


The article goes through a long list of 8 pin chips but ignores the very popular $0.10 CH32V003, which has 2k RAM and 16k Flash running at 48 MHz and 1 CPI -- or the new CH570 (I have a dev board on the way) which is also $0.10 in SOIC8 but now runs at 100 MHz with 16k RAM and 256k flash and has USB and a 2.4 GHz packet radio.


CH32V003 is not available on mouser.com or digikey.com

Googling for "CH570" produces results about tractors. Got a link?

EDIT: found info here: https://www.cnx-software.com/2025/04/02/10-cents-wch-ch570-c...

8-pin part lacks USB AND only has 3 I/O pins. It would be disqualified due to being too I/O-poor. Wasting 5 pins out of 8 is a joke!

As for the old one, CH32V003: 48MHz is slower than the STM's 150MHz, half the flash, 1/4 the RAM. It is still not the best option.

I did update the article with them, though :)


> 8-pin part lacks USB AND only has 3 I/O pins

But you get radio (BLE in the CH572 version), which means you don't need USB.

My comment was not that you didn't choose them but that you didn't consider them.


You do NOT get BLE in the 8-pin part

I just considered them and added them to my writeup :)


You get the radio in the 8 pin part. That's the "ANT" connection, pin 8, one of the three "wasted" pins (along with the crystal) you complain about.


Yup. All of which would be useless for this project. :)


I am imagining a world where Bruce and Dmitry team up on a project together, maybe we need CNLohr as arbiter.

I bow to both Dima and Charles on making practical physical hack projects!

I'm more of a software guy. I guess my biggest hardware project (other than building a 3D printer from someone else's plans) was in 2012 or so making a PID-based home heating controller using an Uno with a custom perfboard shield with a thermistor for measuring the air temperature and a 433 MHz transmitter for talking to a set of Jaycar 240V remote controlled power outlets to control primarily a 2400W oil column heater, plus a fan pointing at it which I turned on when it was at a high duty cycle. Plus switching the home water heating on and off on a fixed schedule just as a side thing from the same board. I used that in Wellington for 3 winters before moving to Russia for a few years and saved NZ$500 per winter on electricity bills compared to previous years using a mechanical Honeywell thermostat to control the heater.


That is really cool. Sounds like it could be turned into a great product. It is funny that Nest basically added 3 if statements to a thermostat over a reed thermal switch and became a multibillion dollar company for it.

It worked really well. That Honeywell thing let the temperature cycle over maybe a 2 °C range. I was sampling the thermistor 100 times in about 1/10th of a second (analogRead() on Arduino) and then doing exponential averaging each second with a 32-second time constant (i.e. `temp = temp - (temp >> 5) + new_reading`), getting a nice stable temperature reading that moved smoothly in 0.01 °C increments. Turning the heater on and off at most every 30 seconds, the room temp usually stayed within ±0.03 °C of the set temperature -- not precise on an absolute scale, but detecting and correcting small changes accurately.

I was actually thinking of trying to make a product, especially once Flick Electric started up with electricity prices changing every 30 minutes based on the wholesale rate plus a 2c margin. You could (still can) get the current 30 minute wholesale price at your local substation from electricityinfo.co.nz with a simple http query, so you can build something to make intelligent decisions.

But then I got a job overseas and dropped the project...
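For anyone curious, that fixed-point filter can be sketched like this (a minimal stand-in for the Arduino loop; the constant 512 just plays the role of a raw analogRead() value):

```python
# temp = temp - (temp >> 5) + new_reading keeps the accumulator at
# 32x the smoothed reading, giving a time constant of ~32 samples
# (one update per second => ~32 second time constant).
def make_filter():
    acc = 0
    def update(new_reading):
        nonlocal acc
        acc = acc - (acc >> 5) + new_reading
        return acc >> 5   # smoothed value, back in raw ADC units
    return update

smooth = make_filter()
for _ in range(400):      # feed a steady raw reading of 512
    value = smooth(512)
print(value)  # → 512: the accumulator settles at 512 * 32
```

The nice part is that it's all integer shifts and adds, so it runs comfortably inside a 1-second loop on an 8-bit AVR.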


2.4 GHz. I was really wondering about the 24 GHz.


oops .. my brain of course knows but my fingers didn't in this instance. Fixed.

I think it's compatible with the old nRF24 chips -- I'll test when mine arrives in a week or so. The CH572 version has BLE5 ... I think the same hardware but including a software stack.


Damn, only one seller on Aliexpress right now and no dev boards. Where’d you find yours?


The official WCH store on Aliexpress. Stock is coming in slowly 10 at a time and then selling out in 30 minutes or so, but it is coming in.

https://www.aliexpress.com/item/1005008743123631.html


There are plenty of MCUs that will work as a USB device, they were just ruled out by the package restriction.


Well yeah, nowadays high-end microcontrollers may have an integrated USB HS PHY (notably the STM32F7s and the MIMXRT1060 used in the Teensy 4, among many others), but the basic cheap ATtiny-like or iCE40-like parts don't, and most usually require going through an external PHY. I've been wanting to get into using the CH32V305 because it is in a hand-solder-friendly TSSOP-20 package and has an integrated USB HS PHY, but I hear it doesn't have good software support and I don't see it on Mouser/DigiKey/etc. We may soon have easy access to 20-cent microcontrollers with USB HS, but the protocol still feels incredibly complex and way overkill for simply interfacing a peripheral to a computer.


Any old SAM D21 will do USB. What software support are you looking for? Integration in TinyUSB?


I still install brand new computers with serial ports! Dell sells us OptiPlex towers and we occasionally order them with a serial card to connect to legacy scientific instruments.


I bought a Lenovo mini ThinkStation with a serial port because I thought it would be cool. But I don't even know what cool stuff I can plug in there, except for a serial console.


If there has to be a chip to facilitate comms, I feel like you could go on a similar hunt for 8-pin microcontrollers that could serve that function and maybe also provide some extra functionality. It would be interesting if it could connect to a PC through a DDC connection.


USB is something that is possible to understand, and apparently to bit-bang, at least at low speed (1.5 Mbps), and probably at full speed (12 Mbps) as well on a modern MCU. I don't understand it, but one can.

In that sense it's like SPI, or perhaps more like CAN or SD: when you don't understand it, you reach for someone else to have done it for you, but you can choose to understand it and once you understand it you can implement it.

If you're the slave you have tight timing requirements but you only have to respond with certain fixed bit patterns to certain other bit patterns. If you're the master, you can do more things concurrently because the slave won't notice a little jitter in how often you poll it, but you have the problem of dealing with a wider variety of slaves that can be connected.


Oh, USB 1.1 at the transport layer and lower is not that difficult.

But there is more complexity on higher layers. USB HID (mice and keyboards) is often the first you'd want but it is special in that it allows a device to describe its own packet format in a tokenised data description language. The device only has to send an additional blob when asked, but the host has to parse the contents of that blob and use the result to parse the device's packets.
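For a feel of what that tokenised blob looks like, here is a classic 3-button boot-mouse report descriptor, close to the example in the HID specification's appendix (written out from memory, so treat the exact bytes as illustrative rather than authoritative):

```python
# Each line is one HID "item": a tag/size byte followed by its data.
boot_mouse_descriptor = bytes([
    0x05, 0x01,  # Usage Page (Generic Desktop)
    0x09, 0x02,  # Usage (Mouse)
    0xA1, 0x01,  # Collection (Application)
    0x09, 0x01,  #   Usage (Pointer)
    0xA1, 0x00,  #   Collection (Physical)
    0x05, 0x09,  #     Usage Page (Buttons)
    0x19, 0x01,  #     Usage Minimum (1)
    0x29, 0x03,  #     Usage Maximum (3)
    0x15, 0x00,  #     Logical Minimum (0)
    0x25, 0x01,  #     Logical Maximum (1)
    0x95, 0x03,  #     Report Count (3)
    0x75, 0x01,  #     Report Size (1)   -> three 1-bit button fields
    0x81, 0x02,  #     Input (Data, Variable, Absolute)
    0x95, 0x01,  #     Report Count (1)
    0x75, 0x05,  #     Report Size (5)   -> 5 bits of padding
    0x81, 0x01,  #     Input (Constant)
    0x05, 0x01,  #     Usage Page (Generic Desktop)
    0x09, 0x30,  #     Usage (X)
    0x09, 0x31,  #     Usage (Y)
    0x15, 0x81,  #     Logical Minimum (-127)
    0x25, 0x7F,  #     Logical Maximum (127)
    0x75, 0x08,  #     Report Size (8)
    0x95, 0x02,  #     Report Count (2)  -> two signed 8-bit deltas
    0x81, 0x06,  #     Input (Data, Variable, Relative)
    0xC0,        #   End Collection
    0xC0,        # End Collection
])
print(len(boot_mouse_descriptor))  # 50 bytes to describe a whole mouse
```

The host has to walk this item stream just to learn that each report is 3 button bits, 5 pad bits, and two relative bytes, which is exactly the kind of parsing burden the parent comment is describing.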

And of course, every time there is complexity in a protocol and there are multiple implementations of it, there is more opportunity for them to be incompatible in very subtle ways. This has meant, for example, that some gaming keyboards with N-key rollover that work perfectly on MS-Windows without any special drivers have been rejected outright by Apple or Linux hosts. (I hope these issues have been fixed by now, but I'm not sure.)


I thought there was also a mandatory fixed-layout "boot" profile for mice and keyboards? There was some controversy because vendors interpreted it as only being allowed to support the boot profile, resulting in most USB keyboards having 6-key rollover maximum.


In a nutshell, yes. There's a flag (actually a "subclass") for indicating the ability to switch to the boot protocol. Some hosts, especially BIOSes, did check that flag but never sent the command to switch.

Keyboard vendors catered to the lowest common denominator because it was essential that users be able to enter the PC BIOS at boot.

BTW, Apple chiclet keyboards (before the "Magic") got an interesting workaround to this problem for their proprietary Fn key. They use a variation of the boot protocol but with only a 5-byte key array (5KRO). When the Fn key is pressed, the sixth byte contains a code that would otherwise be an error code if interpreted as the boot protocol.


I mean technically you have all of these interfaces on a raspberry pi.


Yes, but the RPi is not yet quite a standard desktop/laptop workstation. And it would be nice if the RPi exposed its SPI, I2C, and UARTs over a standard mini connector like JST-SH (as is done with Stemma QT, Qwiic, and Grove) to ease plugging in peripherals.


Tad more than 8 pins on a pi


i mean... someone did try this with i2c. a couple of dead computer companies shipped a bus that i forget the name of, based on this concept. its descendant is the vga hdmi control channel spec (which was implemented as a de facto separate standard but is very similar)

the name is escaping me


ACCESS.bus by Philips (who developed I²C) and DEC, and the DEC variant SERIAL.bus (with different voltage levels) used by their keyboards and mice for a little while.


And a variant of ACCESS.bus lives on as the extremely widely adopted DDC that is a part of HDMI, DVI, and VGA.


Why are lithium-ion phone and laptop batteries still legal considering their safety risks? There are safer battery chemistries that aren't quite as energy-dense. But phones and laptops were capable enough 15 years ago, and performance-per-watt is constantly improving. Sure, we might not be able to light up all the pixels on our screen, stream gigs of data constantly, or train AI models when our laptop is not plugged into the wall, but we sufficed just fine on the performance of last decade's mobile devices.


By that logic, we would have to ban cars, gas stoves and even kitchen knives.

Everything has risks — it's about managing them. Lithium-ion batteries are widely used because their benefits outweigh the risks when handled properly.

It's like saying, "Why are candles still legal? They can start fires." Well, because people know how to use them responsibly.


> considering their saftey risks

The safety risks are marginal and you interact with plenty of other things/systems daily that are at least as dangerous.

> there are safer battery chemistries that aren't quite as energy-dense

^ that's the answer.

> But phones and laptops were capable-enough 15 years ago

They absolutely weren't.

> we sufficed just fine on the performance of last-decade's mobile devices

I don't want to suffice.

All that said, I do think battery research is probably one of the most important things "we" can be doing (and energy storage in general), so I'm all about putting in the money and time to find improvements.


Because the actual risk is so far overblown.

Why do we still let kids go outside when there are so many kidnappings?

The Samsung battery debacle around the Note 7, which made headlines for weeks, was from 0.003% of phones catching fire.


Phones and laptops were not capable enough 15 years ago for what we expect of them today.


Unfortunately, that 'sell back to grid' price is often only a small fraction of the ~17 cent/kWh purchase price from the grid. The battery is less for backup but is instead to help make economic sense for your home, by storing the excess you produce when it is sunny...


Commercial solar home batteries use safer battery chemistries which don't experience thermal runaway like lithium-ion laptop batteries do.


Yeah. Commercial home solar battery storage, as I understand it, is done with safer chemistries such as lithium iron phosphate, which, while lower in energy density (not a big downside for a stationary building), don't have the thermal runaway issues that laptop lithium-ion batteries have. I wouldn't want to live next door to the DIY laptop-battery-array enthusiast.


He seems to be doing it fairly safely by having it housed in a building a whole 50m away from the main dwelling. A fire from there could spread to the house or elsewhere but it's no longer a metal fire so it's a lot easier to deal with and just contain the fire in/around the shed. I'd probably add a nice gravel buffer around it to help that and live in a reasonably well hydrated part of the country so there's not as big a fire risk from embers.


Great if you are a skilled electrical engineer who owns a bunch of land somewhere that doesn't have any fire risk.

