1. Here is the subset: any algorithm that is learning-based, trained on a large data set, and that modifies or generates content.
2. I would argue that translation engines have their positives and negatives, but many of the effects are negative: they lead to translators losing their jobs, and to a general loss of the magical qualities of language learning.
3. Predictive text: I think people should not be presented with possible next words and should instead think of them on their own, because that makes them more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less, and what they do write will be of greater significance.
4. I am against all LLMs, including wildlife camera-trap analysis. There is an overabundance of hiding behind research when we already understand the problem fairly well. It's a fringe piece of conservation research anyway.
5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the benefit does not justify the technological enslavement.
6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.
I wish your parent comment didn't get downvoted, because this is an important conversation point.
"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"
I think this is vibes, based on bad headlines and no actual numbers (and tbf, founders/CEOs talking outta their a**). In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin. I say this as someone academically trained on well-modeled dynamical systems (the opposite of machine learning). My team just lost. Badly.
Case in point: I work with language localization teams that have fully adopted LLM-based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just not working out like we were told in the headlines. Doomsday radiologist predictions [1], same thing.
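(If it helps make that concrete: the integration side is a tiny amount of code. Here's a minimal sketch using DeepL's official Python client; the auth key, strings, and language pair are placeholders, not our actual setup.)

    import deepl

    # Minimal sketch: one machine-translation call of the kind a
    # localization pipeline batches by the thousands. The auth key
    # and text below are placeholders, not real config.
    translator = deepl.Translator("your-auth-key")
    result = translator.translate_text(
        "The machine drafts; a human translator still post-edits.",
        source_lang="EN",
        target_lang="DE",
    )
    print(result.text)  # machine draft, handed to a translator for review

The per-call cost is trivial, which is exactly why the bills scale with volume: the machine produces cheap drafts, and the hiring happens on the human review side.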
> I think this (esp. the sufficient number of bad actors) is vibes, based on bad headlines and no actual numbers. In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin.
We define bad actors in different ways. I also include people like tech workers and CEOs who build systems that take away large numbers of jobs. I already know people whose jobs have been eroded by AI.
In the real world, lots of people hate AI-generated content. The advantages you speak of accrue only to those technically minded enough to extract greater material advantage from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of translators, graphic designers, and others losing their jobs.
And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term, but the long term will be primarily bad, and it already is for many.
I think we'll have to wait and see here, because the layoffs can just as easily be attributed to leadership making crappy over-hiring decisions during COVID, being unable to admit that, and hand-waving with "I'm firing people because of AI" to drive a different headline narrative (see: founders/CEOs talking outta their a**).
It may also be the narrative fed to actual employees: saying "You're losing your job because of AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking; AI was inconsequential. If a business is growing, AI can only help. Whether it grows or shrinks doesn't depend on AI; it depends on the market and on leadership decision-making.
You and I both know none of this generative AI is good enough unsupervised (and realistically, it still needs deep human edits). But it's still a massive productivity boost, and productivity boosts have always been huge economic boosts to the middle class.
Do I wish this tech could also be applied to real middle-class shortages (housing, supply chains, etc.)? Sure. And I think that will come.
Just to add one final point: I included modification as well as generation of content, since I also want to exclude technologies that merely improve existing content in a way that is very close to generative but may not be considered so. For example: audio improvements like echo removal and ML noise removal, which I have already shown to interpolate.
I think AI classification and similar tasks are probably okay, but of course with that, as with all technologies, we should be cautious about how we use it, since it can also be used for facial recognition, which in turn can be used to build a stronger police state.