I want to make a few points to clarify some of the choices and why I made them. This is very helpful and I appreciate all the comments; they highlight how some things are clear in our own heads, but we don't end up sharing that with anyone reading. So:
1. I looked at AdGuardHome but I preferred PiHole because I found its documentation a bit more helpful for my purpose (the Unbound sample, the Wireguard setup, etc)
2. I saw the Docker Compose package, but I wanted something that runs at the OS level. There are Docker packages for Wireguard too, and I also had a look at Mistborn (https://gitlab.com/cyber5k/mistborn)
3. The VPN is the main thing I wanted set up, to reach resources on my home network; adblocking and DNS came a bit later. So you can run this without a VPN, but it's central to my setup.
4. I really wanted this setup at the OS level and to hopefully learn more about the whole process.
> 1. I looked at AdGuardHome but I preferred PiHole because I found its documentation a bit more helpful for my purpose (the Unbound sample, the Wireguard setup, etc)
Probably the right call, but funnily enough, I had to go the other way. PiHole started using 100% of the CPU on my Raspberry Pi 1B after an update to version 6.x, which then obviously slowed the entire network to a crawl and made it unusable. Although later versions supposedly fixed that, whatever was the latest version at the time still had that problem for me, even on a completely fresh install.
AdGuardHome worked for me without any hassle, but I would never even have considered it, given I'd been happy with PiHole for 5+ years, if that update hadn't completely borked PiHole's usability.
I had Wireguard on Docker before for some containers, but it felt clunky and it overcomplicated the network stack in my head (I'm unfortunately not very skilled in networking in general). So I decided to go back to basics and run it at the OS level, because then I can expose Proxmox, or any of the other VMs I run, to the world by having them join the Wireguard network. Which in turn means that I can connect to any machine I want/need directly. I am also playing around with writing my own dynamic DNS worker in C#, and I was curious about how I could have that run as a systemd process but bypass the Wireguard tunnel to keep updating IP addresses. A lot of this was tied to me just being a bit more curious about the whole stack.
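For what it's worth, the "bypass the tunnel" part mostly comes down to what goes in AllowedIPs on the client side. A minimal sketch (all keys, addresses, and the endpoint here are placeholders, not your actual setup):

```ini
# /etc/wireguard/wg0.conf on a VM joining the VPN
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route ONLY the VPN subnet through the tunnel. Anything else
# (e.g. a dynamic DNS updater phoning home) keeps using the
# default route, so it sees the real public IP.
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```

With AllowedIPs = 0.0.0.0/0 instead, all traffic would go through the tunnel, and the updater would then need an explicit routing exception to see the real IP.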
I put Zorin OS on my dad's old laptop 5 years ago, and I think the only time I got a question was when someone setting up his new internet was digging through network settings but hadn't used any Linux distro before. Even then it was a 5 min call. It's a very Windows-like experience, and I've noticed most parents really just write an email, browse the web and maybe consume media. All of those can happen in a browser.
I have looked at that briefly; I think I went with PiHole in the end for having a UI to easily see any resolution issues, and for local DNS management (which, I think, Unbound also supports, but via configs rather than a UI).
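For reference, local DNS entries in Unbound are just a couple of lines in the config; a minimal sketch (the zone name and IP are made up):

```
# unbound.conf snippet: answer queries for a local hostname ourselves
server:
    local-zone: "home.lan." transparent
    local-data: "nas.home.lan. IN A 192.168.1.10"
```

It works fine, but you do lose the at-a-glance overview a UI gives you.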
This is such a good point. I've changed jobs a lot, and one thing that's consistently bad (to varying degrees) is documentation, and tutorials specifically.
"Download X and setup"
"It's not working..."
"Oh yeah, you're supposed to do it on the remote access VM"
"It says access denied"
"Oh right, you're supposed to use the Yubikey for access"
"I don't have a Yubikey, its pass + authenticator"
"Ok, I'll email Jeff from this department, who you won't hear from until someone new starts. But otherwise keep following the tutorial and you should be good to go!"
It always infuriates me. At my last job I had a lot more control and authority, so I redid the entire tutorial for the project we worked on. Every few months I'd check all my account permissions, update the list in the readme, spin up Windows/Ubuntu VMs, and try to get the project running using ONLY the tutorial. Anything missing got added.
If anyone added a new dependency the documentation would be updated and the steps checked on a new VM. I did this as we had various people come in and work for a few weeks, add a new feature and leave. The end result was that instead of 1-2 weeks to get running, people would have everything running within their first day and start work sooner. Instead of needing someone for 4 weeks for ONE feature, we could finish 2-3 and sprinkle in more tests and confidence.
I think most developers would benefit from writing for a less experienced audience, especially for this sort of thing.
I've always had this attitude that the person after me should not face the same issues I did, so I update any wrong documentation; this also helps me remember how stuff works. But I have had many occasions where I just had to give up. People thought I was being annoying, my PRs fixing the Readme (or adding a Readme at all, at times!) were simply not picked up, etc, etc.
I left that company, and left a letter for management about the abysmal developer experience.
I prefer to think that updating the documentation isn't fixing the root issue, and that working systems are their own documentation if explained properly, so document in such a way that it can't become outdated, when feasible.
This is really great. I've managed to convert a few people to talk over Signal and while I am backing up my chats to my home server (I see you will be offering something like this in the future), this wasn't really an option for the people I converted over to Signal, so they were constantly afraid that they might lose the pictures or the chats if something happened to their phone.
I know, you can download media and save it through something else, but most people just opt in to whatever the default is. I think my only suggestion would be to make it really clear, or maybe even have some sort of counter that says something like "39 images are no longer backed up" or "8374 media items are NOT being backed up, 507 are in backup, 29 will be removed tomorrow". This could be directly on the backup page. I'm not currently running the beta build as I installed the apk, so if it's already on there, scratch the feedback!
Thank you again for all your hard work on this, it really is appreciated (financially too!)
I've been reading these posts for the past few months and the comments too. I've tried Junie a bit and I've used ChatGPT in the past for some bash scripts (which, for the most part, did what they were supposed to do), but I can't seem to find the use case.
Using them for larger bits of code feels silly as I find subtle bugs or subtle issues in places, so I don't necessarily feel comfortable passing in more things. Also, large bits of code I work with are very business logic specific and well abstracted, so it's hard to try and get ALL that context into the agent.
I guess what I'm trying to ask here is: what exactly do you use agents for? I've seen YouTube videos, but a good chunk of those are people getting a bunch of TypeScript generated for some cobbled-together front end that has Stripe added in, and everyone is celebrating as if this is some massive breakthrough.
So when people say "regular tasks" or "rote tasks" what do you mean? You can't be bothered to write a db access method/function using some DB access library? You are writing the same regex testing method for the 50th time? You keep running into the same problem and you're still writing the same bit of code over and over again? You can't write some basic sql queries?
Also, not sure about others, but I really dislike having to do code reviews when I am unable to gauge the skill of the dev I'm reviewing. If I know I have a junior with 1-2 years, then I know to focus a lot on logic issues (people can end up cobbling together the previous simple bits of code), and if it's later down the road at 2-5 years, then I know I might focus on patterns, or look to ensure that the code meets the standards and check for more subtle or hidden bugs. With an agent's output it could oscillate wildly between those. It could be a solidly written, well-optimized search function, or it could be a nightmarish SQL query that's impossible to untangle.
Thoughts?
I do have to say I found it good when working on my own to get another set of "eyes" and ask things like "are there more efficient ways to do X" or "can you split this larger method into multiple ones" etc
So, as someone who has applied to a lot of jobs and has had a lot of jobs, I'm going to be a bit more critical here. I think the amount of information they gave is sufficient for a take-home.
Make an email client, email view+send, fake backend or real imap, handle plaintext.
At this stage, for a take-home, I'd start working and write down assumptions I made as I went along. I'm probably the opposite of the author, as a take-home (unless it's the last stage or something) is, in my view, a tester to see what a person can do within a few hours of work. I've had several take-home exercises during my time as a software engineer and they varied from "we have provided all details that a stakeholder would provide" to "if you have any further clarifying questions, please get back to us".
The most recent one I took, a couple of years ago, came with an internal library the company used. They said, "use this library to make a web app that takes advantage of these methods inside of it; create an app that simulates behaviors using those methods. Do not spend more than 10 hours on the assignment."
I started coding, threw something together that worked in about 6-7 hours, and wrote down assumptions as I worked, as well as the trade-offs from those assumptions: "I assume the user would not be bothered by a failure here; if reliability were important, what would we want to do in the case of a failure? Retry? Back off retries? etc etc". I then provided the code along with a list of improvements: "Add unit tests to these 3 components, add an integration test to ensure this functionality works end-to-end if it's essential, improve the UI, clean up the code base, refactor these services, use a framework/library for the UI instead of hard-coded JS to make these few calls." I wrapped it up because I think I was 70-80% there.
During the next interview with the architect of the company, he said that the solution I provided worked fine, and we discussed some of the things I did and why. But specifically (and this is why I'm commenting here), he mentioned that he appreciated the assumptions document, the future improvements, and the "I stopped here because I'd rather get feedback on this and refine, as opposed to keep building something I imagine you want". He said that the ability to work up to the point where you've hit the majority of the story, and then get feedback on that incomplete project, is better than having someone disappear into a cave for a month and come back with what is, in their opinion, the absolutely finished product, only to then have to make changes to get it in line with what was actually wanted.
So I guess, become comfortable with uncertainty. If nothing else, ask only "hey, I assume you want something along these lines that I can bang out in 5-6-10-20-40 hours. If you're not happy with what you get for 5 hours, it might not be a good fit in general or if you think I prioritized the wrong thing in those 5 hours, then we can chat about that too". I am also saying this, because my current role is a lot more in line with what the author is looking for - they spend weeks refining requirements, they write documentation, they create mocks and they have meetings over meetings before I even know what I'm supposed to do and within a day or two they start realizing that what they described is about 10-20% off what they wanted or what is possible and the whole cycle of meetings starts again. Instead, and I've been pushing for this each sprint, I'm asking them to accept a certain level of unknown in a given story, we work on it, we see it behave and get used and refine it based off of that.
The author seems to want waterfall, but agile exists for a reason. Hell, let's not even call it agile or whatever else: in a real situation, you start doing and you learn as you go. You refine based on feedback, on new experiences, and on new requirements. You work in the murky areas of someone else's mind. Or not, I don't know, but expecting a series of Jira tickets with screenshots and deliverables from a company that just wants to see how you think, how you work with uncertainty, and how you deal with unknowns feels... wrong.
Not sure how these are that "new", seeing as Toto (and I assume other Japanese brands too) have had designs like the Nautilus for some years. One of the things that stuck with me after a trip to Japan was exactly how thoughtful their toilet designs are. Public toilets with these tall urinals were amazingly clean in even the busiest stations, and would let you get a good angle and not splash onto the floor/shoes. Similar designs, scaled down, were found in their newer Limited Express trains. Also, that angular design makes no sense: a human being will need to clean it, and anyone who's ever had to clean angular ceramics will know that that design will just be a pain to get properly clean...
I guess what I'm saying is: before we start researching new methods, why can't we be bothered to spend even a little bit of time seeing what else is already out there?
I've built a few small personal projects in C# using Vim with only NERDTree. I keep doing this every few months (small meaning the solution has around 4-5 projects and performs a few clearly defined little functions), and it really is both helpful and interesting to realize how many things we take for granted, and how great it feels to better understand dependencies, which NuGet packages are needed, version compatibility issues, and many other things.
I also end up better knowing and remembering any new classes and methods because I have to dig through the reference documentation for each of these things.