Hacker News | psibi's comments

Looks like the author has also written magit integration for it: https://github.com/dandavison/magit-delta

Any user feedback on how well it works (performance, etc.)?


This is an interesting approach. These days I have completely switched to the just[0] tool for similar use cases.

[0]: https://github.com/casey/just
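For readers who haven't used just: a justfile is a list of named recipes with optional dependencies and arguments. A hypothetical sketch (not from any real project) looks roughly like this:

```just
# justfile: run recipes with `just <name>`
build:
    cargo build --release

# `test` depends on `build`, so `just test` runs both
test: build
    cargo test

# recipes can take arguments, interpolated with {{...}}
deploy env:
    ./scripts/deploy.sh {{env}}
```

Running `just` with no arguments lists the available recipes, which is a big part of the discoverability appeal.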


Related discussion in the bcachefs subreddit: https://www.reddit.com/r/bcachefs/comments/1f4erbg/debiansta...

Kent's reply in that thread has more details.


I was going through the OpenCost documentation, which this project uses, and it looks like you need to set up AWS Athena if you want cloud costs to be displayed for AWS: https://www.opencost.io/docs/configuration/aws#aws-cloud-cos...

Does Athena do the actual processing/computation of the costs? What is the usual cost of running Athena?

It also seems strange that I have to put IAM keys into secrets instead of using IAM Roles for Service Accounts to configure it.


The Cost and Usage Report (CUR) from AWS is just a fine-grained listing of all the resources in your account and their cost. It can be dumped out on different schedules (hourly, daily, monthly) and in different formats (CSV, Parquet).

It is pretty common to configure the CUR files to be dumped into an S3 bucket and query them via Athena. Athena is billed per TB of data scanned ($5/TB last time I looked), so the cost depends on how often and how much data is queried. The downside is that each query can take quite a while to execute, depending on data size.
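A back-of-the-envelope estimate follows directly from that pricing model: cost is just TB scanned times the per-TB rate. A small sketch, assuming the ~$5/TB figure mentioned above (verify the current rate and your region on the AWS pricing page), with illustrative query volumes:

```python
# Rough Athena cost estimate for querying CUR data.
# PRICE_PER_TB is an assumption based on the ~$5/TB figure above.
PRICE_PER_TB = 5.00

def athena_cost(gb_scanned_per_query, queries_per_day, days=30):
    """Estimated monthly cost in USD for a given query volume and scan size."""
    tb_scanned = gb_scanned_per_query * queries_per_day * days / 1024
    return tb_scanned * PRICE_PER_TB

# e.g. 24 hourly queries a day, each scanning ~2 GB of Parquet CUR data:
print(f"${athena_cost(2, 24):.2f}/month")  # → $7.03/month
```

The numbers (2 GB per query, 24 queries/day) are made up for illustration; real scan sizes depend heavily on CUR format (Parquet scans far less than CSV) and partitioning.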

The other common option is to ingest the CUR data into Redshift which gives you better control / options for performance, manipulation, etc. but requires that you set up and manage Redshift.

Hard to tell exactly what the Athena cost here would be, as it depends on the number of assets in the account and how frequently you query the CUR. However, for most AWS use cases you can issue quite a few Athena queries on CUR data without incurring much cost. Unless you have a rapidly changing environment (e.g. hundreds of thousands of assets turning over daily) or just tons of standing assets, you should be safe to assume a few hundred dollars a day at most, and probably much less for most use cases. This assumes querying once and storing the results rather than real-time querying all the time, normal usage patterns, etc.


Is OpenTofu also planning to maintain the language server (terraform-ls), or is that not within scope? I was not able to find any language-server-related repository.


The language server is still MPL-licensed. Not sure if that's slated for a change, but I'm sure they can always fork if/when that happens.


This looks nice!

As someone interested in developing a client for it, I'm curious about a couple of things: which features it currently supports, the tweakable configuration that can be passed to it, and the various code actions available. I like the way the nil language server has documented these (https://github.com/oxalica/nil/tree/main/docs). Is there something equivalent available for this?


It's very much still a "POC" to verify that the libpg_query approach works; perhaps I should have made that clearer in the description. We have a PR open that adds source-code generation to the Rust crate, which is close to merging.

Your comment is useful, and that's exactly why we submitted this to Show HN: it's easier to get these sorts of early feature requests now than in three months once we've implemented more functionality. I'll drop a note in GitHub Discussions to investigate the nil approach.


For the last couple of years, I have been using a combination of the clocking commands [0] in Emacs along with org-journal [1]. It has been working well enough for me, although I did have to send some patches to org-journal initially to suit my workflow.

[0] https://orgmode.org/manual/Clocking-commands.html

[1] https://github.com/bastibe/org-journal


For terraform and terragrunt, I have been using tfswitch and tgswitch respectively.

tfswitch can also parse the required_version constraint in a Terraform file and switch to an appropriate version based on it, which I find quite handy.
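The constraint tfswitch reads is the standard required_version setting in the terraform block. A generic illustration (the version bounds here are made up):

```hcl
terraform {
  # tfswitch parses this constraint and installs/activates a matching release
  required_version = ">= 1.5.0, < 2.0.0"
}
```

With a file like this in the working directory, running tfswitch picks a version satisfying the constraint instead of prompting you to choose one.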


I don't think just is trying to be a build system. Its major focus is being a task runner, and in that space it does its job well, IMO.


As a task runner, why is it better than a bash script? Being able to run tasks in parallel is like the most fundamental feature I would expect from a task runner. Being able to skip over tasks that don't need to be done is a close second.
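The two features asked for here are mechanically simple. A toy sketch in Python (not how just, make, or any real task runner is implemented) showing parallel execution plus skipping of already-done tasks:

```python
# Toy task runner: run independent task callables in parallel,
# skipping any whose names are already in `done`.
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, done=frozenset()):
    """Run each pending task in parallel; return {name: result} for tasks run."""
    pending = {name: fn for name, fn in tasks.items() if name not in done}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in pending.items()}
    return {name: fut.result() for name, fut in futures.items()}

tasks = {"lint": lambda: "lint ok", "test": lambda: "tests ok"}
print(run_tasks(tasks, done={"lint"}))  # → {'test': 'tests ok'}
```

The hard parts a real tool adds on top are dependency ordering between tasks and deciding staleness (e.g. by comparing file timestamps), which is where make-style systems earn their keep.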


Because I don't want to have to read and figure out each person's bash idiosyncrasies, bugs, etc. in a pile of undocumented scripts just to add a new task that runs a new test case. just gives you a framework to put all your tasks in one file, document them, and find and run them easily.

If bash works for you, stick with it. In my experience it does not work for large teams with people who aren't deeply experienced in all the footguns of bash.


The recommendation is now to use Cairo + HarfBuzz instead. There are more details here: https://github.com/emacs-mirror/emacs/blob/master/etc/NEWS.2...

