
Sincere question about communism:

Let's say hypothetically that the distribution of stock ownership was more even across the population, and variance was largely (but not completely) due to length of time in the workforce. And further, that the stock owned by workers is a large enough block that they effectively have controlling shares at many companies. Maybe I'm talking about a different universe, but please imagine it for a moment.

Would that hypothetical world be kind of like communism in the sense that the workers own the means of production? If not, why not?


Yes, it would be so. In fact, if automation starts taking jobs from people on a larger scale, it would be a way to reduce economic inequality.

That concept/idea is called a Universal Basic Dividend.


HN is binary. If not laissez-faire capitalism, let's talk about communism.

Let's talk about the non-hypothetical real world, like the Nordic model or Europe in general. Capitalism produces growth, and redistribution (free healthcare, education, etc.) spreads the wealth.


That's just a cooperative, and those exist in capitalism: https://en.wikipedia.org/wiki/Cooperative.

You can found one today if you want; you don't need communism for that.


So, actually, that happened once already, when the Soviet system broke up:

https://en.wikipedia.org/wiki/Voucher_privatization

The thing is, it began like you described, but most people didn't see the value in it and sold their vouchers immediately. That's the reason why you have an oligarch caste, as it mainly got fueled by this process (sure, not the only source of their wealth, but they understood back then how to play the game).


It's like communism, but it's not communism, or at least not the ultimate form of communism. IMHO, the key difference is that in communist theory the workers own the means of production collectively, through a worker-class-ruled society, not as individuals; and your scenario still operates within the framework of a market-oriented economic system.

Why wouldn't it work?

First, when equity dispersion is accompanied by the dispersion of decision-making power, it can lead to excessively high decision-making costs, reduced efficiency, and a lack of competitiveness. However, when equity is dispersed but decision-making power is concentrated (i.e., a dual-class share structure), the interested parties with decision-making power tend to skew the benefits towards themselves.

Either way, it's not compatible with a market-oriented economic system.

Second, "altruistic collectivization" is entropy-increasing.

For example, if a company does what you described, it's hard for it to survive when a crisis occurs. Its products may fail to sell, and the company may go bankrupt. Other companies can also go bankrupt, but their failure is part of the capitalist system; when they fail, the system as a whole doesn't necessarily fail. But when a "communist-style" company fails, it dies, and it won't come back easily.

More importantly, this is a world dominated by capitalism. It's not just an ideology of companies, it's an ideology of states. For example, when the IMF/WHO comes to a country that is suffering from a global capitalist economic crisis, the IMF will kindly offer its help, but with some conditions, such as requiring you to also accept its help in reforming your economic system.

Although there are still some similar commercial entities that are "owned" collectively, most of them are stagnant. They can survive, but they cannot expand.


Longjmp is used by Postgres for transaction aborts. With C, there's not really a better option available.
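
For anyone unfamiliar with the pattern, here's a minimal sketch in plain C of the general technique: an error-reporting routine longjmp()s back to a recovery point established a few stack frames up. This is only an illustration with made-up names, not Postgres's actual machinery (which wraps sigsetjmp()/siglongjmp() in its own macros):

    #include <setjmp.h>
    #include <stdio.h>

    /* Recovery point for the current "transaction" (illustrative only). */
    static jmp_buf abort_target;

    /* Report an error and unwind to the recovery point; never returns. */
    static void report_error(const char *msg)
    {
        fprintf(stderr, "ERROR: %s\n", msg);
        longjmp(abort_target, 1);
    }

    static void do_some_work(int input)
    {
        if (input < 0)
            report_error("negative input not allowed");
        printf("processed %d\n", input);
    }

    static void run_transaction(int input)
    {
        if (setjmp(abort_target) != 0)
        {
            /* longjmp() landed here: clean up, roll back, carry on. */
            printf("transaction aborted, state cleaned up\n");
            return;
        }
        do_some_work(input);
        printf("transaction committed\n");
    }

    int main(void)
    {
        run_transaction(42);   /* commits */
        run_transaction(-1);   /* aborts via longjmp() */
        return 0;
    }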


That's not from within signal handlers, though. (i.e. it relates to this specific longjmp discussion but not the root post re. exceptions on other threads.)


It unfortunately is used from within signal handlers, albeit only in specific cases (SIGFPE). There used to be several more, but luckily we largely cleaned that up over the last few years.


Meh. Well. Good to hear on the cleanup. Didn't know it used to be different :/

Re. SIGFPE, to be fair, it feels a bit like the "asynchronous vs. synchronous abort¹" thing on CPUs; synchronous aborts are reasonably doable while on asynchronous aborts you're pretty much left with torching things down far and wide.

(SIGFPE should hopefully be synchronous; it's in fact closely connected to sync/async CPU aborts...)

[¹ frequently also called exceptions, depending on the CPU architecture, but this post already uses "exception" for the language level concept]


> Meh. Well. Good to hear on the cleanup. Didn't know it used to be different :/

If you want to be scared: Until not too long ago postgres' supervisor process would start some types of subprocesses from within a signal handler... Not entirely surprisingly, that found bugs in various debugging tools (IIRC at least valgrind, rr, one of the sanitizer libs).

> Re. SIGFPE, to be fair, it feels a bit like the "asynchronous vs. synchronous abort¹" thing on CPUs; synchronous aborts are reasonably doable while on asynchronous aborts you're pretty much left with torching things down far and wide.

Agreed, I think it's quite reasonable to use signals + longjmp() for the FP error case. In fact, I think we should do so more widely - we lose a fair bit of performance due to all kinds of floating point error checking that we could instead set up to signal.
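
To make that concrete, here's a minimal, hypothetical sketch (plain C, not Postgres code) of trapping SIGFPE with sigaction() and unwinding with sigsetjmp()/siglongjmp(). It only works because the SIGFPE here is raised synchronously by the faulting thread, and anything read after the jump should be volatile; real code needs considerably more care:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static sigjmp_buf fpe_target;

    static void fpe_handler(int signo)
    {
        (void) signo;
        /* Tolerable only because SIGFPE is delivered synchronously here. */
        siglongjmp(fpe_target, 1);
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = fpe_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGFPE, &sa, NULL);

        volatile int divisor = 0;            /* volatile: re-read after the jump */

        if (sigsetjmp(fpe_target, 1) == 0)   /* save the signal mask, too */
        {
            printf("%d\n", 1 / divisor);     /* typically raises SIGFPE on x86 */
            puts("division succeeded");
        }
        else
            puts("recovered from SIGFPE via siglongjmp()");

        return 0;
    }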


"It is our responsibility to chose the right tool for the job."

That perspective doesn't work well for database products, in my opinion. There is huge pressure for databases to evolve with your business and applications and to adapt to whatever you throw at them.

Swapping out a database product is less like changing tools and more like changing a foundation. You can't do it every time a new problem arises.

That's not to say you can't use a few different products if that makes sense. But that has its complications.


Simon was one of the first people I met in the Postgres community, perhaps in 2007 at the first PGCon that I attended. We've attended many of the same conferences in places around the world, and I've occasionally had the chance to explore those places with him. He was always kind to me and helped me immensely. I was proud to have the chance to co-author a major feature with him. The last time I saw him was this past December.

Very sad.


What are the barriers to doing so?


What are some killer apps for FPGAs? What major products do they enable?


They enable a lot of crazy defence products. A well-known German product, for example, is the IRIS-T from Diehl Defence. Highly accurate and exceptional engineering. But I guess FPGAs are in most defence products nowadays. I think the biggest reason is that you can build/verify your own hardware without having to go through expensive ASIC manufacturing.

Edit: I just realized that these are some literal killer apps. That wasn't even intentional, lol.


> But I guess FPGAs are in most defence products nowadays.

Yes.

> I think the biggest reason is that you can build/verify your own hardware without having to go through the expensive ASIC manufacturing.

Plus, you don't give out your secrets to fabs, too. Design, verify, launch, discard, without the need for signing an NDA.


"Plus, you don't give out your secrets to fabs, too."

That's a perfect answer to my question, thank you.

Also many other great answers in this thread, but I don't have much to add.


It also probably makes it easier to prevent adversaries from being able to delid/reverse engineer products. When using FPGAs you don't even need to have the firmware/gateware on or near device until it's in use, which would help prevent any sensitive trade secrets from making it into the wrong hands.


"Jones, fire on the bandit at 270."

"Yessir."

"Jones, why isn't that SAM in the air?"

"Sarge, it's flashing the bitstream. The progress bar says 60%."


Any sensor that captures a ton of data that needs real-time processing to 'compress' it before it can be forwarded to a data accumulator. Think MRI or CT scanners, but industrially there are thousands of applications.

If you need a lot of real-time processing to drive motors (think industrial robots of all kinds), FPGAs are preferred over microcontrollers.

All kinds of industrial sorting systems are driven by FPGAs because the moment of measurement (typically with a camera) and the sorting decision are less than a millisecond apart.

There are many more; it's a very 'industrial' product nowadays, but sometimes an FPGA will pop up in a high-end smartphone or TV because they allow adding certain features late in the design cycle.


They enable a bunch of niches (some of which do have a large impact), as opposed to having a few high-volume uses. Basically anything where you really need an ASIC but you don't have the volume to justify one (and do have the large margins required for such a product to be viable). Custom RF protocols, the ASIC development process itself, super-low-latency but complex control loops in big motor drives, that kind of thing. You'll almost never see them in consumer products (outside of maybe some super-tiny ones which aren't useful for compute but just do 'glue logic') because they're so expensive.


What you're describing is correct for the top-end FPGA products (they're in every 5G base station, and almost every data centre has thousands of them rerouting information), but the low-end ($10 or less) 2k LE FPGAs are in a hell of a lot of products now too. They're fantastic for anything where you need a lot of logic that executes immediately/concurrently (vs. sequentially, as it would with a microcontroller) in a tiny package. Think medical devices, robotics, comms devices, instrumentation, or power controllers.

I'm pretty sure there's an FPGA in most consumer devices now, but as you say they're there for some sort of glue logic, and that's a killer niche unto itself. Schematics can shift and change throughout a design cycle, and you only need to rewrite some HDL rather than go hunting for a different ASIC that's fit for purpose. It's a growing field again as their cost has come right down. They're in the Apple Vision headset, the Steam Deck, modern TVs, and a host of small-form-factor consumer computing products.


> they're in every 5G base station

Just a tiny nitpick to your great answer, but Nokia's 5G base station stuff (ReefShark) is built around ASICs. I would expect others to do the same. There's some reasoning at https://www.electronicdesign.com/technologies/embedded/artic...

https://www.nokia.com/about-us/news/releases/2020/06/15/noki...


The ReefShark ASIC sits alongside an FPGA which acts akin to an IPU. I know only because I played my own small part in the design. It was originally meant to be entirely FPGA-based, but they got hit with some severe supply constraints by Intel and Xilinx, which is why cost keeps getting discussed. Prices have dropped back down to stable numbers again since mid-last year, but at the time ASICs ended up being more affordable at the volume they're doing (demand spiked mid-project due to the removal of Huawei networking equipment).


We (outside Wireless) heard the Intel silicon didn't perform/yield and the original designs became infeasible, prompting a sudden mad scramble. I didn't realise it was originally planned to be FPGA-based. Interesting, thanks.

Very glad to hear things have improved.


> I'm pretty sure there's an FPGA in most consumer devices now,

I can’t think of the last time I saw an FPGA on a mainstream consumer device. MCUs are so fast and have so much IO that it’s rare to need something like an FPGA. I’ve seen a couple tiny CPLDs, but not a full blown FPGA.

I frequently see FPGAs in test and lab gear, though. Crucial for data capture and processing at high speeds.


Low-latency (e.g. less than 20 lines) video switchers/mixers. There's a huge amount of data (12 Gbps for 4K/UHD) per input, with many inputs and outputs, all with extremely tight timing tolerances. If you loosen the latency restrictions you can do a small number of inputs on regular PCs (see OBS Studio), but at some point a PC architecture will not scale easily anymore and it is much more efficient to just use FPGAs that do the required logic in hardware. It's such a small market that for most devices an ASIC is not an option.


Blackmagic's whole gear line is based on Xilinx FPGAs. Whatever product of theirs you see, if you tear it down, it will almost always be nothing more than SerDes chips, I/O controllers, and FPGAs.


Anything where you wish you could have an ASIC but don't have the budget for a custom one, and where using smaller chips either makes for a worse bill of materials or takes up more space.

They are used everywhere, including some very small ones I've seen used purely for power sequencing on motherboards: usually a very small FPGA with embedded memory that "boots" first on standby voltage and contains simple combinational logic that controls how other devices on the motherboard are powered up, faster than any MCU could, while taking less space than discrete components.

Glue logic, custom I/O systems (including high-end backplanes in complex systems), custom devices (often combined with "hard" components in the chip, like ARM CPUs in Zynq series FPGA), specialized filters that can be runtime updated.

Lots of uses.


They're used in places that require real-time processing (DSP) of raw digital signals at very high rates (several hundred MHz and more), where you cannot afford to miss a sample because of latency from a microcontroller. I think even some PCI devices use them for this reason, and they let you update firmware, whereas an ASIC doesn't.

A while back I wrote an entire FPGA pipeline to recalibrate signals from optical sensors before they were passed on to the CPU. Doing this allowed us to keep processing speed up with acquisition, so it was real time. A lot of FIR filters and FFTs. But my proudest achievement was a linear interpolation algorithm, which is fairly high level and tricky to implement on an FPGA, which is more geared towards simpler DSP algorithms like FIR filters and FFTs (the FFT isn't simpler, but so much effort has gone into making IP cores that it effectively is, because you don't have to implement it yourself).
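
For the curious, the arithmetic core of such an interpolation is tiny in software; the FPGA effort goes into fixed-point scaling and pipelining the multiply. A hypothetical Q16.16 fixed-point reference model (not the actual implementation described above) might look like:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical Q16.16 fixed-point linear interpolation reference model.
       y0, y1: neighbouring samples; frac: fractional position in [0, 65536). */
    static int32_t lerp_q16(int32_t y0, int32_t y1, uint32_t frac)
    {
        int64_t diff = (int64_t) y1 - (int64_t) y0;   /* widen to avoid overflow */
        return (int32_t) (y0 + ((diff * (int64_t) frac) >> 16));
    }

    int main(void)
    {
        /* Halfway between 100 and 200: expect 150. */
        printf("%d\n", lerp_q16(100, 200, 1u << 15));
        return 0;
    }

On an FPGA the multiply typically maps onto a hardened DSP block and the whole thing becomes a few pipeline stages, which is where the implementation effort goes.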

But other than that, for raw bulk compute GPUs are kicking their butts in most domains.


To give you an example, these are often used in CNC machines.

Before you had to have:

A. a PLC that ran logic with real time guarantees to tie everything together. The PLC is often user-modified to add more logic.

B. Decoders that processed several-MHz encoder feedback signals, somewhere between 3 and 10 of these.

C. Something that decides what to do with the data in B

D. Encoders and motor driving, also output at several MHz (somewhere between 3 and 10 of these as well)

Among other tasks.

These were all separate chips/boards/things that you tried to synchronize/manage. Latency could be high. But if you are moving 1200 inches per minute (easy), that's 20 inches per second, so 100 milliseconds of latency is equivalent to losing track of things for 2 inches. Enough to ruin anything being made.

Nowadays it is often just an FPGA hooked up to a host core.

(or at a minimum, a time-synchronized bus like ethercat)


- ASIC emulation and prototyping

- High-frequency trading (executed in-fabric)

- Niche real-time video devices, etc.

- Cryptocurrency mining

- Real-time motor pulse generation for robotics

- Custom NICs and HPC devices

- RF signals processing (radar, guidance, etc.)


Products with PCIe (PCI Express) and high speed interfaces like 10G Ethernet, SATA, HDMI, USB 3.0 and higher, Thunderbolt.

Most of the ASICs with these SerDes interfaces are not for sale on the open market, only to OEMs who buy MOQs in the millions.

Take for example the Raspberry Pi SBCs. The Raspberry Pi only got PCIe very late (the Compute Module 4), and Jeff Geerling unlocked it with a lot of difficulty (https://pipci.jeffgeerling.com), but you still can't buy these cheap microprocessors from Broadcom.

The reason is that no cheap PCIe chips are available for hobbyists and small company buyers (below a million dollars).

'Cheap' FPGAs starting at $200+ were, and still are, the only PCIe devices for sale to anyone. If you want to nitpick, a few low-speed SerDes are available in $27 ECP5 FPGAs, but nothing at 10 Gbps or higher.

Another example: I sell $130 switches with 100 Gbps switching speeds, PCIe 4x8, and QSFP28 optics. But you can't buy the AWS/Amazon ASIC chips on this board anywhere, nor their competitors' chips from Broadcom, Intel, MicroSemi/Microchip, or Marvell.

I went as high as Intel's vice president and also their highest level account manager VPs and still got no answer on how to buy their ASIC switches or FPGAs.


The core of modern oscilloscopes is often an FPGA that reads out the analog-to-digital converters at ~gigasamples/s and dumps the result into a RAM chip. Some companies (Keysight, Siglent) use custom chips for this, but FPGAs are very common.


From a consumer-facing perspective, FPGAs have enabled a golden age of reasonably affordable and upgradeable hardware for relatively niche tech hobbies.

* Replacement parts for vintage computers

* Flash cartridges and optical drive emulators for older video game consoles

* High-speed, high-quality analog video upscalers

Many of these things aren't produced at a scale where bespoke chips would be viable. Using an FPGA lets you build your product with off-the-shelf parts, and lets you squish bugs in the field with a firmware update.

There is also MiSTer, an open source project to re-implement a wide range of vintage computer hardware on the Terasic DE10-Nano FPGA.


Stuff with a lot of simultaneous I/O that needs to be processed in parallel is one answer.

https://www.intel.com/content/www/us/en/healthcare-it/produc...


Lower-volume specialty chips for interfaces (lots of I/O pins), such as adapters for an odd interface, custom hardware designs for which there isn't an existing chip, etc.

For instance, audio, video or other signal processing can be done by putting the algorithm "directly" into the hardware design; it will run at a constant predictable speed thereafter.


RME achieves sub-FireWire latencies over USB 2 with them in their audio interfaces, plus the ability to enable new functionality via updates.


I think low latency is the main thing. In most cases, to get an FPGA that's faster in terms of compute than a GPU/CPU you're going to have to spend probably hundreds of thousands (which the military do, e.g. for radar and that sort of thing).

But even a very cheap FPGA will beat any CPU/GPU on latency.


In the past, I'd have tried to use Achronix's FPGAs for secure processors like the Burroughs B5000, Sandia Secure Processor, SAFE architecture, or CHERI. One could also build I/O processors with advanced IOMMUs, crypto, etc. Following the trusted/untrusted pattern, I'd put as much non-security-critical processing as possible into x86 or ARM chips, with secure cores handling what had to be secure.

High-risk operations could run the most critical stuff on these CPUs. That would reduce the security effort from who knows how many person-years to basically spending more per unit and recompiling. Using lean, fast software would reduce the performance gap a bit.

CHERI is now shipping in ASICs, works with CPUs that fit in affordable FPGAs, and so this idea could happen again.


One particular use of FPGAs (and ASICs) is operating on bit-oriented rather than byte-oriented data. Certain kinds of compression and encryption algorithms can be implemented much more efficiently on custom chips. These are generally limited to niche applications, though, because the dominance of byte-oriented general-purpose CPUs and microcontrollers selected against such algorithms for more common applications.
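
To make "bit-oriented" concrete, here's a small hypothetical C sketch: pulling an arbitrary-width field out of a packed bit stream costs shifts, masks, and per-field bookkeeping on a byte-oriented CPU, whereas in an FPGA the equivalent extraction is essentially just wiring:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Read `nbits` (1..25) starting at absolute bit offset `bitpos` from a
       packed, MSB-first byte buffer. */
    static uint32_t read_bits(const uint8_t *buf, size_t bitpos, unsigned nbits)
    {
        uint32_t acc = 0;

        /* Pull in enough whole bytes to cover the field, then trim. */
        for (unsigned i = 0; i < (nbits + bitpos % 8 + 7) / 8; i++)
            acc = (acc << 8) | buf[bitpos / 8 + i];

        acc >>= (8 - (bitpos + nbits) % 8) % 8;   /* drop the trailing bits */
        return acc & ((1u << nbits) - 1);         /* keep only the field    */
    }

    int main(void)
    {
        const uint8_t stream[] = { 0xAB, 0xCD, 0xEF };
        printf("0x%X\n", read_bits(stream, 4, 12));   /* prints 0xBCD */
        return 0;
    }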


Ultra-accurate classic computer and videogame emulators!


It's not so much about accuracy as the low-latency video output (and at the correct refresh rate).


Low latency video and correct refresh rate are part of why FPGA emulation is more accurate.


It can be of use in anything that handles a lot of data throughput but not built in large enough numbers to justify producing an ASIC. First example that comes to mind is an oscilloscope, but by definition FPGAs can be used anywhere (from retrogame consoles to radars).


Broadly speaking, anything that does a lot of reasonably specialized logic and medium-to-high-performance work will have an FPGA in it (unless it's made in very high volumes, in which case it may be an ASIC; ditto for very high-performance things).

Some FPGAs are absolutely tiny e.g. you might just use it as a fancy way of turning a stream of bits into something in parallel for a custom bit of hardware you have, other FPGAs are truly enormous and might be used for making a semi-custom CPU so you can do low latency signal processing or high frequency trading and so on.


I think we're going to see greatly increased use of FPGAs in AI applications. They can be very good at matrix multiplication. Think about an AI layer that tunes the FPGA based on incoming model requirements. Need an LPU like Groq? Done. I would bet Apple Silicon gets some sort of FPGA in the neural engine.


But ASICs perform way faster and more efficiently. I doubt the gain you would get from "retuning" the FPGA would be enough to outweigh the benefits of a general-purpose processor, a GPU, or an ASIC.


Until you need floating point performance.


They are useful for products which do video encoding, decoding, and microwave receive and transmit of video data. They are useful for TCP/IP insertion and extraction of packet data, e.g., in video streams.


Some older video game consoles have been "emulated" in FPGA. You just map out the circuitry of a device and voila you get native performance without the bugs of a software implementation.


Deterministic latency so you know the upper bound and lower bound


Given the importance of storage and networking when working with big data for LLMs, having storage and network code in an FPGA might be useful.


Military Radar and Sonar


SDR


Postgres is designed to recover on OOM in most cases.
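
To illustrate the general idea (a toy sketch, not Postgres's actual memory-context code): allocations go through a wrapper that turns a NULL return into a reported error that aborts just the current operation, rather than letting the process crash on a NULL dereference.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy allocation wrapper: on failure, report an error and let the caller
       abort just the current operation, instead of crashing on a NULL pointer.
       (Postgres's real code uses memory contexts and its longjmp-based error
       handling; this is only a sketch of the general idea.) */
    static void *alloc_or_fail(size_t size, int *failed)
    {
        void *p = malloc(size);
        if (p == NULL)
        {
            fprintf(stderr, "ERROR: out of memory requesting %zu bytes\n", size);
            *failed = 1;
            return NULL;
        }
        *failed = 0;
        return p;
    }

    int main(void)
    {
        int failed;

        /* An absurdly large request, to provoke a failure on most systems. */
        char *big = alloc_or_fail((size_t) -1 / 2, &failed);
        if (failed)
            puts("operation aborted, server keeps running");
        else
        {
            memset(big, 0, 16);
            free(big);
        }
        return 0;
    }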


That's nice, how well does it work in practice?


It works. Bugs have been found, and some more bugs probably exist, but I think it meets a fairly high bar of quality.


I happen to be reading Sapiens right now. The author seems convinced that we genetically developed the ability for abstract thought (which he calls imaginary or fiction), and that allowed us to cooperate much more easily in large numbers. For instance, believing in a particular flag allows you to identify your allies on the battlefield without having ever met them before. The theory goes that other animals, including other hominins, genetically lack the hardware to rally around a flag.

This also led to the notion of a shared culture across many individuals that can adapt much more quickly than genetic evolution.

(Unless I misunderstand, of course.)

The author squarely blames Homo Sapiens for wiping out the other hominins, pointing out how many other large species seemed to go extinct as soon as we arrived someplace new, even long before the Agricultural revolution.


Certainly our capacity for abstract thought is our superpower, but recall, we tried to spread to Europe for 200,000 years, and the Neanderthals always were able to push us back out again.

50,000 years ago, something changed. Maybe it was an increased capacity for abstract thought, but if so, that was a software and not a hardware development, because we had the same anatomy for 300,000 years. If we could implement the software, I don't see why the Neanderthals couldn't have done so, as they had even bigger brains than we do.


Part of the theory is that abstract thought enables culture, and culture enables much faster adaptation than genetic evolution. But still far from instant.

So maybe the abstract thought hardware was there for a long time before we got culturally coordinated enough to use it with full effectiveness to organize large groups capable of outcompeting Neanderthals.


It looks like Ada 2022 has a way to track at compile time whether functions block or not. Seems cool for async programming.

http://www.ada-auth.org/standards/22over/html/Ov22-2-2.html


I don't think any compiler implements that yet.


Do you believe it was designed and standardized as a reasonably implementable feature?


I'm far from an expert on compiler development, but I think it's just a matter of walking the AST to check that calls inside a parallel block don't violate the Global or Nonblocking aspects.
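
As a rough, hypothetical illustration of that kind of check (generic C, nothing to do with GNAT's internals), the nonblocking part boils down to walking the tree with an "inside a parallel construct" flag and rejecting calls whose callees aren't marked nonblocking:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical AST shapes; a real compiler IR is far richer than this. */
    typedef enum { NODE_SEQ, NODE_PARALLEL_BLOCK, NODE_CALL } NodeKind;

    typedef struct Node {
        NodeKind            kind;
        bool                callee_nonblocking;  /* only meaningful for NODE_CALL */
        const char         *name;                /* callee name, for diagnostics  */
        const struct Node **children;
        int                 n_children;
    } Node;

    /* Recursively reject calls to potentially blocking procedures that occur
       anywhere inside a parallel construct. */
    static bool check(const Node *n, bool in_parallel)
    {
        bool ok = true;

        if (n->kind == NODE_CALL && in_parallel && !n->callee_nonblocking)
        {
            fprintf(stderr, "error: potentially blocking call to %s in parallel block\n",
                    n->name);
            ok = false;
        }

        bool inner = in_parallel || n->kind == NODE_PARALLEL_BLOCK;
        for (int i = 0; i < n->n_children; i++)
            ok &= check(n->children[i], inner);

        return ok;
    }

    int main(void)
    {
        Node blocking_call    = { NODE_CALL, false, "Put_Line", NULL, 0 };
        Node nonblocking_call = { NODE_CALL, true,  "Compute",  NULL, 0 };
        const Node *body[]    = { &nonblocking_call, &blocking_call };
        Node par              = { NODE_PARALLEL_BLOCK, false, "parallel", body, 2 };
        const Node *top[]     = { &par };
        Node unit             = { NODE_SEQ, false, "unit", top, 1 };

        return check(&unit, false) ? 0 : 1;   /* reports the Put_Line call */
    }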

The bigger issue is that the `parallel do` and `parallel for` blocks added in Ada 2022 [1] haven't been implemented and as far as I know, nobody's working on it.

I suspect that if we ever do get parallel support, it'll come from the GNAT-LLVM project [2], rather than GNAT-GCC. In the meantime, there's a CUDA compiler [3].

[1] http://www.ada-auth.org/standards/22over/Ada2022-Overview.pd...

[2] https://github.com/AdaCore/gnat-llvm

[3] https://github.com/AdaCore/cuda


Outside of AdaCore, compilers have only recently started catching up to Ada 2012 and SPARK, so this will take a while.


The article addresses this: "However - and this is the big, enormous caveat of this section - this would rely on the compiler effectively implementing some kind of unbreakable firewall, to prevent a type that does not implement Leak from ever getting used with pre-2024 code."


What I mean is: instead of having a “firewall” in the compiler, is there a way to interpret pre-2024 code such that it has correct but conservative Leak bounds? Then there would be more confidence that, if a mixed-edition program type-checks, then it’s correct.


It took me a minute, but the firewall is talking about the reverse direction: preventing post-2024 code without Leak from being used in pre-2024 code.

