
Trails is a great concept, but it's the wrong monetization strategy.


Exactly, DataFusion implies the batteries-included Apache big-data ecosystem. Polars is chasing the Python Pandas crowd and uses Python syntax, handy if you're already comfortable with IPython.


Can't you use DataFusion single node/without any Apache ecosystem stuff? They have a Python library and DataFusion is "just" a query engine. (If anything, I'd call Pandas the batteries included option...)

I think the difference is more that DataFusion is built as a library so you can plug it into the product you're building (e.g. Comet, which plugs it into Spark, or pg_lakehouse, which plugs it into Postgres). Polars could be used that way, but it's also a functional package you can pip install and use as a Pandas alternative right now.
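To make that concrete, here's a rough sketch of what the two look like from Python, assuming `pip install datafusion polars`; the file and column names are made up, and exact Polars method names vary a bit across versions:

    from datafusion import SessionContext
    import polars as pl

    # DataFusion: an embeddable query engine, driven here through its Python bindings
    ctx = SessionContext()
    ctx.register_csv("events", "events.csv")
    ctx.sql("SELECT user_id, count(*) AS n FROM events GROUP BY user_id").show()

    # Polars: a DataFrame library with a Pandas-like feel, usable right after pip install
    print(
        pl.scan_csv("events.csv")
        .group_by("user_id")
        .agg(pl.col("user_id").count().alias("n"))
        .collect()
    )

Same query either way; the difference is whether you think of it as embedding an engine or using a DataFrame package.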


"pg_analytics (formerly named pg_lakehouse) puts DuckDB inside Postgres" https://github.com/paradedb/pg_analytics



That's true. We have some more ideas for DataFusion in the works, though... Stay tuned!


I found a job and got hired.


The RPi Pico is irrelevant in my view. Seriously, check out the RISC-V equivalents (there are many; I like the ESP32-C3).

Linux on RISC-V is real; China is using and standardizing on it. Ditch ARM, skip the RPi, ignore STM32, and scoff at TI .. RISC-V is the future.

While you're at it, skip MicroPython and skip Linux as the OS: use Rust and target RISC-V directly.

RISC-V is cheap and will only get cheaper; nothing in the IoT space is going to compete with it except absurdly expensive niche proprietary solutions that were engineered before the black-hole-esque gravity well that RISC-V has become in IoT. I'm not even sure those will survive.

The RISC-V options are only going to keep multiplying. I say this because China is using RISC-V for most of its future everything; it's basically the national chip, and tens of thousands of developers and engineers come out of school into this ecosystem every year.

Do not use MicroPython; avoid all those runtime debugging headaches. Yeah, it might take you a few months or even years to learn Rust, but the concurrency, the power saving, the deterministic behavior - oh joy! The headaches you will save, the extra rest at night, and the reduced stress will increase your lifespan.

QEMU RISC-V for emulation + testing, and you'll save yourself a ton of time by only deploying and supporting code that works. Full simulated environments, tests running in parallel in the cloud - few platforms can do that! RISC-V + Rust is a joy.

Fewer crashes in the field -- and let me tell ya, when you're doing anything IoT that is the only thing that matters: never having a crash in the field is better than chocolate cake.


A microcontroller is a means to an end. When I need a CPU for an FPGA design, I will always use RISC-V because there's plenty of easy to integrate open source CPUs available. Similarly, when I need a hard microcontroller chip or board, I use whatever suits me best. And then I frankly don't care whether it's RISC-V or ARM. Why should I? I use what's available and has the right features.

There's nothing irrelevant about an RPI Pico: it has more RAM than most, it has excellent documentation and example code, it has PIOs that are incredibly versatile, it's available everywhere, and it's dirt cheap.

For quick prototyping I use MicroPython, otherwise I use C. Why should I have to learn Rust for something simple?


Do you have any recommendations on how to learn about PIO? I've mostly been in the Arduino world and got pretty comfortable with AVR features. Now I'm writing MicroPython to control NeoPixels for a board called the Plasma2040, and instead of bitbanging the signal from a digital pin, it's apparently generating the signal via PIO, but it's all black magic to me.


This is a blog post by someone converting a conventional bit-banged method of driving a display to pure PIO. Maybe it helps to give you an idea?

https://www.zephray.me/post/rpi_pico_driving_el/


It is, at once, both over my head and very encouraging that someone who hasn't used PIO before was able to achieve this. Thanks for the link; I think if I compare this code to what the Plasma library is doing I might be able to tease out what's happening.


Did you have a look at chapter 3 of the Raspberry Pi Pico C SDK already by the way? It's about PIO programming.

https://www.raspberrypi.com/documentation/microcontrollers/c...


I did not notice they wrote a whole book on the SDK, this looks perfect, thanks again


The C SDK contains examples; the actual documentation is in the RP2040 docs. It's also chapter 3.

https://www.raspberrypi.com/documentation/microcontrollers/


Chapter 3 of the RP2040 datasheet is all about PIO, but it may not be the best way to learn: https://datasheets.raspberrypi.com/rp2040/rp2040-datasheet.p...

I find the Pico SDK examples useful to learn, but it requires that you already have a good understanding of the instructions and the protocol that it implements. Here's the I2C PIO program, for example: https://github.com/raspberrypi/pico-examples/blob/master/pio....
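And if it helps to see how small these programs can be from the MicroPython side, here's a minimal blink-style sketch modeled on the official rp2 examples (untested here; Pin(25) is the plain Pico's onboard LED, and the Plasma2040's LED data pin will differ):

    import rp2
    from machine import Pin

    # A tiny PIO program: drive the pin high, then low, forever.
    # The [31] suffix adds 31 extra delay cycles after each instruction,
    # so the timing comes from the state machine clock, not the CPU.
    @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
    def blink():
        wrap_target()
        set(pins, 1) [31]
        set(pins, 0) [31]
        wrap()

    # Load it into state machine 0 and start it; the CPU is no longer involved.
    sm = rp2.StateMachine(0, blink, freq=2000, set_base=Pin(25))
    sm.active(1)

The NeoPixel program your Plasma library uses is the same idea, just shifting bits out with carefully chosen delays instead of a fixed toggle.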


Hey I managed to miss that raspberrypi repo since I jumped straight into my neopixel project, many thanks


I don't know how you can say it's irrelevant. Multiple standards can exist concurrently - not even because they deserve to, but because people like them and are comfortable with them, even if they're a bit crappy.

Also the RP2040 is a great chip and it's super available and it's super cheap. And it runs Rust.


> Ditch ARM, skip the RPi, ignore STM32, and scoff at TI .. RISC-V is the future.

Can you point to any parts that are in full production and have a 5-7 year supply horizon?


Since I like to build custom electronics using JLCPCB, I went on there and looked for RISC-V chips. I found the ESP32-C3 and a dozen or so Chinese chips that seemed to only have Chinese datasheets. It seems that things are still pretty immature in RISC-V adoption. I will be getting an ESP32-C3 dev board to tinker with at least.


The projects I work on need long time horizons and industrial or automotive environmental specs. RISC-V will be hardware cosplay (h/t n-gate!) until that time, as far as I can see it.


I just built a keyboard based on the RP2040 because it's cheap, capable, obtainable, and you can also write rust for it. The RP2040 is interesting though with its PIO peripheral.

Do you know of any RISC-V chips which have something like that? Generally curious, too, about which RISC-V chips are as widely available as the RP2040.


Thanks for the recommendation on ESP32-C3. I've been wanting to work with a cheap RISC-V chip. Unfortunately the lead time on mouser is 9 weeks and it only offers 22 GPIO.

In contrast the RP2040 is readily available, has 30 GPIO, and the Pico dev board is only $4. I'm not building an IoT device for my next project so I consider the die space dedicated to those features wasted.

I recognize this post is about IoT, but I just wanted to say I don't think the RP2040 or Pico are irrelevant. The platform has its benefits; in my case, relying more on the SoC and not having to grow the BOM.


Things to look for:

1. How much automation is there, and does the founding team 'care' about technical best practices? That will ultimately determine the operational cost of the systems and the type of people the organization hires (do I want to work with the kind of people the company is going to need to hire?).

2. Do the co-founder(s) understand their market? Do they have a realistic and achievable plan, including visual mock-ups of any key behaviors or features? Can they articulate the vision to me, or will I have discretion on how to implement it? I stay away from dubious social science, psychology, and anything described as "the next XYZ".

3. What is the equity structure being offered, and what is the compensation relative to the market opportunity (i.e. how likely are we to reach an exit)? What is the market size? Do we have customer #1 (and #2, etc.) in mind, and a sales strategy? Avoid a field-of-dreams "build it and they will come" mentality.

Fwiw, I have a horrible track record finding co-founders. I prefer an odd number of people in a startup. I don't ever do 50/50 anymore; it's always 49/51, or 49.99 and 50.01, whatever - it's never 50/50 by contract. One person is 'the decider'. I always offer them the 51, but that balance might flip in my favor if they don't deliver on mutually agreed, achievable KPIs - sort of like side bets - and this equity percentage can move a lot in the early stages, but it keeps everybody focused. If feelings and egos are going to be bruised, I'd rather find out early; if they are going to be greedy and try to screw me later, I'd rather not engage at all.


k8s is the best solution for _universally_ bringing pre-container "legacy" application patterns into a cloud. K8s won the battle (vs. Mesos, OpenShift) for a number of reasons - but one of those reasons was absolutely not simplicity; rather, k8s was better at handling the edge cases. This is a popular opinion held by Viktor Farcic of the DevOps Paradox podcast, and I happen to agree with him.

You are absolutely correct that k8s is not necessary for cloud-native environments. Existing companies usually have legacy applications, so they can't go "cloud native".

Once a company has started down the k8s path, the org will (by necessity) start to specialize in a k8s control layer, and that will become embedded; you risk career-icide and being labelled a heretic if you try to challenge the rationality of using k8s. K8s creates a nice way for sysops to keep devs in little boxes and limit the size of the craters they can make. All anybody wants at the end of the day is not to be hassled and not to need to learn something new each day.

Devs are usually horrible system operators, since they fundamentally want to use code to solve problems rather than look for solutions others have built. K8s in this respect creates a very non-creative line of demarcation for system responsibilities. Devs should not be writing backup and logging applications.

If you're building a startup and you don't want the k8s complexity, what you are suggesting is fine - but if you're going to work in a big b0rg org, it's better to get on the k8s bandwagon. What you are saying will get you labelled a heretic, because the k8s admins have enough of a struggle keeping up with k8s complexity and beating devs into submission; the idea of learning anything else is unpopular.

It's straightforward to hire people with k8s expertise, but now this is a domain of knowledge that humanity has cultivated.

I remain of the unpopular opinion that k8s is really only suitable for companies which are the size of Google (imho); the idea of having a massively complex and specialized administration layer is absolutely stupid for most companies. However, the idea of refactoring legacy applications to be cloud native will never fly - too many risks and unknowns, too much disruption; big b0rg orgs don't want none of that.

So k8s is also a safe choice; it's not especially creative, it's burdensome, but big b0rg orgs don't really care about any of that. These folks are unfortunately the decision makers, and they don't understand the topics they are making decisions on, so popularity of tech, groupthink, etc. are going to win there.


> I remain of the unpopular opinion that k8s is really only suitable for companies which are the size of Google (imho)

This is simply not true if you use managed K8s (DigitalOcean, Google GKE, Amazon EKS, Microsoft AKS, ...).

Kubernetes provides a unified API to handle almost every aspect of your containerized application (networking, load balancing, rollout deployments, storage management, service discovery, ...).

You might not need many of those, but you won't have to learn another tool to set it up once you do need it.

Buy a cheap managed k8s with a single node, then helm install the nginx ingress controller and cert-manager (with Let's Encrypt). Make your app read its config from environment variables, put it in a Docker image, and push it to either ghcr.io or a GitLab container registry. Write a Deployment, a Service, and a ConfigMap, eventually an Ingress resource, and voila: your app is up and running in no time.

Repeat the last 3 steps for all your apps. It's that simple.
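For the "read config from an environment variable" part, the app itself can stay completely Kubernetes-agnostic. A minimal sketch using only the Python standard library (variable names made up); the ConfigMap simply shows up as environment variables in the pod:

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Config comes from environment variables, which a ConfigMap (or Secret) can populate.
    GREETING = os.environ.get("APP_GREETING", "hello")
    PORT = int(os.environ.get("APP_PORT", "8080"))

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(f"{GREETING}\n".encode())

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()

The same image then runs unchanged locally, in Docker, or in the cluster; only the environment differs.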

You can then work from there if needed:

  - set up Terraform to automatically provision your managed k8s and managed databases (if you need them)
  - set up a CI/CD pipeline to build/push your Docker images (GitHub Actions, GitLab CI, Jenkins, whatever)
  - add Prometheus monitoring and a /metrics endpoint in your app so it can be scraped (see the sketch below)
  - add a HorizontalPodAutoscaler
  - store secrets in a Vault and inject them into your pods, with k8s-based auth to the Vault using the pod's service account
  - add more nodes to your cluster
  - ...
None of those steps are required to start doing things, and all can be added without changes to your apps.
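As an idea of how little the /metrics endpoint asks of the app, a sketch using the prometheus_client package (the metric name and the "work" loop are made up):

    import random
    import time
    from prometheus_client import Counter, start_http_server

    # Expose /metrics on port 8000; Prometheus scrapes this endpoint.
    start_http_server(8000)

    REQUESTS = Counter("myapp_requests_total", "Total requests handled")

    while True:
        # stand-in for real work
        REQUESTS.inc()
        time.sleep(random.random())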

Yes, for small needs systemd+docker on an EC2 instance can get you far enough. But you'll need to rewrite everything when you need to scale. With Kubernetes, you won't need to rewrite anything.

If you want to self host and operate a k8s yourself, that's a complete other story, and I agree with you that you should not do that if you don't specifically need it.


That's a lot of stuff. Have you tried building using Lambda + ALB/API GW? You write your code and ship it. Everything from auth to metrics to certificate provisioning is shipped out of the box.

There's real overhead in maintaining that infrastructure, and if you're a small company with limited devs it's worth seriously evaluating whether that cost is justified.
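For comparison, the entire "code you write" side of that setup can be a single proxy-integration handler; a hypothetical sketch:

    import json

    # Minimal handler for API Gateway / ALB proxy integration:
    # the event carries the HTTP request, the return value is the HTTP response.
    def handler(event, context):
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"path": event.get("path", "/"), "ok": True}),
        }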


The initial setup (managed k8s + nginx + cert-manager) takes 1h tops.

Writing a Deployment/Service/ConfigMap/Ingress is mostly copy-pasting; `helm create your-chart` will even generate those for you, and you only have the values.yaml to fill out.

From experience, there is really no overhead to that simple setup. Our Github repositories each have a CI/CD workflow (github actions) to build and push the Docker images to our Dockerhub account. Creating a new repository requires copy/pasting (in fact, we automated this with a template repository).

We have a repository where all of our Kubernetes resources/helm charts/... live and are deployed automatically when merged to master/main (GitHub Actions again). This was set up once and requires no maintenance. We have no need for ArgoCD/FluxCD (aka divergence reconciliation) at the moment, so this is enough.

Everything else I listed is extra that isn't needed for small companies but can be added later as you scale/grow.

My point is that when you scale/grow and start needing this extra complexity, the existing setup does not need to change.


Just came here to talk about OpenSCAD. http://openscad.org

It can also be used in Blender or embedded into other applications. Design CAD as code and use Git version control for your objects and designs. VS Code has plugins, or you can use OpenSCAD's own editor.

For many Hacker News-oriented people, this approach will be better than using a mouse.


OpenSCAD is only really suitable for highly parameterised or programmatic shapes. Stuff like fasteners, gears, belts, chains, art, etc.

Those things are really the exception in CAD. It would be masochistic to use OpenSCAD for the things CAD is more commonly used for (consumer products for example).


Finding a good assistant is rough, it requires patience, initiative & chemistry. Once you find 'the one' reward them, because a good assistant is like having a 3rd hemisphere to your own brain -- it sucks when you lose it.

Retooling for an assistant is the most important thing you will ever do for your company, it is what will ultimately allow you to perhaps grow or exit someday.

For new assistants - start them writing down your processes as an operations manual. Every day, they should be making updates and journal entries in the operations manual. The primary purpose of this is to see whether they can write in a way you can understand; if they can't do this, then fire fast.

If you find somebody who can write/edit/update -- make sure they know "this role is temporary until it is permanent" and that their role is to make themselves indispensable by anticipating your needs.

Have them review your inbox, edit your documents, organize your calendar, do research, follow up with clients, handle billing/collections - whatever tasks they can identify and offload from you.

A good assistant, once fully trained, will be 80% right, 20% wrong .. you need to accept they aren't you, but eventually you'll both figure out how to make sure they handle the 80% .. and defer/check with you on the other 20%.

A good assistant will write things down for your next assistant. We call this the "BUS" (or Tram) factor.

I know this seems cold and harsh, but having/updating an operations manual should be part of any new person's first week as they train.


While I concur this is a nice notion, it's a special and rare type of investor who wants to do all the diligence, take the risk, and not pursue a large reward.



