In my everyday experience that's pretty risky. The periphery, as you call it, is often an area where you lack the expertise to spot and correct AI mistakes.
I'm thinking about build systems and shell scripts. I see people every day going to AI before even looking at the docs and invariably failing with non-existent command-line options, or worse, options that break things in very subtle ways.
The same people who, when you tell them to read the f-ing man page, go to Google to look it up instead of opening a terminal.
The same people who push through an unknown problem by trial and error instead of reading the docs first. But now they have this dumb counselor that steers them in the wrong direction most of the time, and the whole process is even more error-prone.
You're wrong. I have all the expertise but none of the time to generate 100s of lines of boilerplate API calls to get the data together, and no interest in formatting it correctly for consumption, let alone doing so statefully to allow interaction. These are trivial problems to solve that are highly tedious and don't affect the business at hand whatsoever. Perfect drudgery for automation, and just scanning the result makes it easy to verify the output or code.
> I have all the expertise but none of the time to generate 100s of lines of boilerplate API calls to get the data together, and no interest in formatting it correctly for consumption,
Previously, PiHole used /etc/dnsmasq.d/ with best practice being to put one's own additional config, or overrides, in separate file(s) in that folder.
PiHole v6 appears to have most of that config built in, and upgrading to v6 removes all of the previous standard config files, leaving only user-created / user-edited files in /etc/dnsmasq.d/. By default, PiHole v6 no longer imports anything from this folder (to prevent possible incompatibilities).
But it's just a setting, and toggling it brings back the original functionality of importing config from files in that folder. And for me, my custom dnsmasq config worked just the same as it previously did.
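In case it helps someone, this is roughly what the toggle looks like on my install (key name from memory; double-check it against your version's settings page):

```toml
# /etc/pihole/pihole.toml (PiHole v6), excerpt
[misc]
  # When true, FTL loads additional dnsmasq config files from /etc/dnsmasq.d/
  etc_dnsmasq_d = true
```

The same option can also be flipped from the web UI settings (with expert mode enabled) or, if I remember correctly, with `sudo pihole-FTL --config misc.etc_dnsmasq_d true`, followed by restarting FTL.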
Self-esteem that easily turns into hubris though. I think the real seniority shows when you are able to work on a legacy codebase full of the shittiest code and not have the slightest desire to rewrite it all.
No experience with Slack, but channels in Teams are pretty terrible.
Notifications are off by default, so people create new channels with you as a member, write extremely important information in them, and you find out weeks later.
Each post is like an announcement, so nobody uses them for the everyday trivial stuff you need a channel for: casual technical discussion, asking for help. Whenever you post in a channel (assuming people have enabled notifications and your post will actually be seen), everyone feels compelled to answer because the UI screams for attention.
And don't get me started on it hiding part of a post by default, so you answer thinking you read everything, but you missed an essential part because the post was 4 rows and 2 were hidden.
You don't have the time because you spend it brute-forcing solutions by trial and error instead of reading the manual and doing them right the first time.
Please acknowledge that your situation is pretty unique. Just take a look at the comments: how many people say, or outright presume, that their company's code is already on GitHub? I'd wager that your org doesn't keep its code at a third-party provider, right? Then you're in a minority.
I don't mean to dismiss your concerns (in your situation, they are probably warranted); I just wanted to say that they are unique and not necessarily shared by people who don't share your circumstances.
This subthread started with someone from a company with a no-AI policy, and people are dismissing it with snarky comments along the lines of "your code is not as important as you believe." I'm just trying to show a different picture: we work in a pretty vast field, and the people commenting here don't necessarily represent a valid sample.
> people are dismissing it with snarky comments, along the lines of "your code is not as important as you believe."
That says more about those people than about your/OP's code :)
Personally, I've had a few collisions with regulation and compliance over the years, so I can appreciate the completely different mindset you need when working with them. On the other hand, at my current position, not only do we have everything on GitHub, but there were also instances where I was tasked with mirroring everything to Bitbucket! (For code escrow, i.e., if we go out of business, our customer gets access to the mirrored code.)
> people commenting here don't necessarily represent a valid sample.
Right. I should have said that you're in the minority here. I'm not sure what the ratio of dumb CRUD apps to "serious business" development is in the wild. I know there are whole programming subfields where your kinds of concerns are typical; they might just be underrepresented here.
Yes, I've had plenty of experience with orgs that self-host everything. I don't think it's a minority; it's just a different cluster than the one most represented here.
Still, I believe hosting is somewhat different, if only because it's established: known players, trusted practices. AI is new, contracts are still being refined, players are still making their names, companies are moving fast, and I doubt data protection is their priority.
I may be wrong, but I think it's reasonable for IT departments to be at least prudent towards these frameworks. Search is OK, chat is OK-ish, but I'd be more careful about crawling whole projects for autocompletion.
> Yes, I've had plenty of experience with orgs that self-host everything. I don't think it's a minority; it's just a different cluster than the one most represented here.
I've done 800+ tech diligence projects and have first-hand knowledge of every single one's use of VCS. At least 95% of the codebases are stored on a cloud-hosted VCS. It's absolutely a minority that host their own VCS.
First, I didn't dismiss their "no AI policy", nor did I use snarky comments. I was asking a legitimate question: most orgs have their code stored on another server out of their control, so what's the legitimate business issue if your code gets leaked? I still haven't gotten an answer.
How is that related? We're talking about continuously sending proprietary code and related IP to a third party; that seems like a pretty valid concern to me.
I, for one, work every day with plenty of proprietary vendor code under very restrictive NDAs. I don't think those vendors would be very happy knowing I let AIs crawl our whole codebase and send it to remote language models just to have fancy autocompletion.
"Continuously sending proprietary code and related IP to a third party"
Isn't this... GitHub?
Companies and people are doing this all day, every day. LLM APIs are really no different. They only seem different when you magic them up as "the AI is doing the thinking", but in reality it's text -> tokens -> math -> tokens -> text: a transformation of numbers into other numbers.
The EULAs and ToS say they don't log or retain information from API requests. This is really no different from Google Drive, Atlassian Cloud, GitHub, and any number of online services that people store valuable IP, proprietary business data, and code in.
Do you read every single line of code of every single dependency you have? I don't see how LLMs are more of a threat than a random compromised npm package or something from an OS package manager. Chances are you're already relying on tons and tons of "trust me bro" and "it's open source bro, don't worry, just read the code if you feel like it".