I have Ultra. Will not be renewing it. Useless; at least have global limits and let people decide how they want to use it. If I have tokens left, why can't I use them for code assist?
I don't think we know enough about the science to say whether teleporting a photon is something we can do or not. For example, maybe we can via micro-wormholes, quantum tunneling, or some other frequency anomaly that cancels out the strong/weak nuclear forces so that an object can be accurately displaced ("teleported") through other objects.
Once you figure out the "through other objects" part, I guess it just becomes an energy-control problem, i.e. how to get object A to location B accurately, and decelerate it, before the effect wears off. That's maybe not so hard when you have a teleport sender and receiver that can handle the acceleration and deceleration.
Hypothetically, the sender would estimate the trajectory required to hit the receiver, then sync/teleport an inert beam of atoms (photons or something) along it. Then, once sync has been established, you would know the trajectory settings to use; perhaps it would be a giga-energy problem: phase the object, accelerate it to light speed, then receive it at the destination and un-phase it. This would allow you to teleport living things without the moral dilemma of losing their original consciousness.
The practical distance would be based on the achievable speed, i.e. how far we can shoot something before it phases back. You can cover a pretty big distance in 1 µs at the speed of light! Around 300 m. If you can keep something phased for 10 ms, you could go 3,000 km, at which point you just form a network of receivers.
Hypothetically, my ancestors were cheetahs and not hominoid apes. One needs to prove the foundational hypotheses before making larger claims built on top of them.
GitHub is not the place to write code; the IDE is, along with pre-CI checks, some tests, coverage, etc. They should get some PM involvement before making decisions.
As long as the resulting PR is less than 100 lines and the AI is a bit more self-sufficient (like actually making sure tests pass before "pushing"), it would be OK, I think. I think this process is intended for fixing papercuts rather than building anything involved. It just isn't good enough yet.
As a matter of principle, I don't use any network trained on non-consensual data stripped of its source and license information.
Other than that, I don't think this is bad tech; however, it brings another slippery slope. Today it's as you say:
> I think this process is intended for fixing papercuts rather than building anything involved. It just isn't good enough yet.
After sufficient time has passed, somebody will rephrase it as:
> I think this process is intended for writing small, personal utilities rather than building enterprise software. It just isn't good enough yet.
...and we will iterate from there.
So, it looks like I won't touch it for the foreseeable future. Maybe if the ethical problems with the training material are solved (i.e., it's trained on data obtained with consent and with correct licenses), I can use it alongside the other analysis and testing tools I use, for a final pass.
AI will never be a core and irreplaceable part of my development workflow.
I feel there's a fundamental flaw in this mindset, though I probably don't understand it in enough depth to explain properly. Maybe it's my thinking here that is fundamentally flawed? Off the top of my head:
If we let intellectual property be a fundamental principle, the line between idea (which can't be owned) and IP (which can be owned) will eventually devolve into an infinitely complex fractal that nobody can keep track of. Only lawyer AIs will eventually be able to tell the difference between idea and IP as what we can encode becomes more complex. Why are weights not code when they clearly contain the ability to produce the code? Is a brain code? Are our experiences like code?
What is the fundamental reason that a person is allowed to train on IP but a bot is not? I suspect this comes down to the same issue as the divide between IP and idea, but there might be some additional dimension to it. At some point we will need to see some AIs as conscious entities, and to me it makes little sense that there would be some magical, discrete moment where an AI becomes conscious and gets rights to its "own ideas".
Or maybe there's a simple explanation of the boundary between IP and idea that I have just missed? If not, I think intellectual property as a concept will not stand the test of time. Other principles will need to take its place if we want to keep up the fight for a good society. Until then, IP law still has its place and should be followed, but as an ethical principle it's certainly showing cracks.
It took longer than I planned, sorry. But here we go:
When you look at proper research, whether from academia or from private corporations, you can always keep track of ideas and the intellectual property resulting from those ideas. Ideas mature into documents, research reports, and proofs of concept. In some cases, you can find the process recorded in lab notebooks. These notebooks are kept according to a protocol, and they're more than a collection of ideas; they're a "brain trail". Then you publish or patent these ideas. Ideally both. These artifacts (publications and patents) contain references and citations. As a result, you can track who did what and which invention came after which. In a patent case, you may even need to defend your patent to show that it's not the same invention that was patented before. In short, you have a trail. There are no blurry lines there.
The thing is, copyright law and the idea of intellectual property were created by humans for humans. First, I'll ask this question: if an instructor or academic is not allowed to teach a course without providing references, whether to the book itself or to the scientist who invented something, why is a bot allowed to? Try citing a piece of a book or publication in a course, paper, or piece of research without giving a reference, and you're officially a fraud and your whole career is in shambles. Why is a bot allowed to do this, be it with a book or a piece of code? For the second perspective, I'll ask a few questions: 1) How many of the books you have read can you recall exactly, or as a distilled summary? 2) For how long can you retain that information without any corruption whatsoever? 3) How many books can you read, understand, summarize, and internalize in an hour? A bot can do thousands, without any corruption and without any time limit. As a result, an LLM doesn't learn; it ingests, stores, and remixes.
A human can't do that to a book (or any artifact) if its license doesn't allow it or its creator doesn't give explicit consent. Why can a bot? An LLM is a large stochastic blender that tends to choose correct words thanks to its weighted graph. A human does it very differently: they read, understand, and let the idea cook by mixing it with their own experience and other inputs (other people, emotions, experiences, and more), and create something unique outside the graph. Yet this creation has its limits. No machine can create something more complex than itself. An LLM can never output something more complex than the knowledge encoded in its graph. It might light up dark corners, but it can't expand the borders. The asymptotic limit is collective human intelligence, even if you give it tools.
So, yes, IP law is showing its cracks because it's designed for humans, not bots. However, I value ethics above everything else. My ethics are not defined by laws but by something much higher. As I replied to someone, "I don't need to be threatened with burning for all eternity to be good." Similarly, I don't need a law to deem something (un)ethical. If what's done is against the spirit of humanity, then it's off-limits for me.
I'd never take something without permission and milk it for my own benefit, especially if the owner of that thing doesn't consent. I bought all the software I had pirated once I started to earn my own money, and I stopped using software that I couldn't afford or didn't want to buy. This is the standard I'm operating at, and I hold all the entities I interact with to that exact standard. Lower than this is unacceptable, so I don't use LLMs, popular or not.
On the other hand, not all AI is the same, and there are other good things that I support, but they are scientific tools, not for consumers directly.
You're welcome. I moved to Source Hut three years ago [0]. My page is https://sr.ht/~bayindirh/
You can also self-host a Forgejo instance on a €3/mo Hetzner instance (or a free Oracle Cloud server) if you want. I prefer Hetzner for their service quality and server performance.
I just use ssh on a home server for personal projects. It's easy to set up a new repo with `ssh git@<machine> git init --bare <project>.git`. Then I just use git@<machine>:<project>.git as the remote.
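Roughly, the whole flow looks like this (just a sketch; `<machine>` and `<project>` are placeholders as above, and `main` is assumed to be your local branch name):

```
# Create a bare repository on the home server over ssh
ssh git@<machine> git init --bare <project>.git

# Point an existing local repository at it and push
git remote add origin git@<machine>:<project>.git
git push -u origin main   # first push sets the upstream
```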
Your method works well, too. Since I license everything I develop under GPLv3, I keep projects private until they mature, then I just flip a switch and make them visible.
For some research I use a private Git server. However, even that code might get released as Free Software when it matures enough.
It is possible to test the chaining though, if you know your data well. If not, those edge cases in the data quality can throw things off balance very easily.
Amazon did that to a physical product my wife is selling. I was very annoyed. It ended up being a configuration issue, but the default being the `show address` state is definitely annoying.
You do you. NIR is the part of sunlight that makes you "feel" good or alive or whatever nice thing you are looking for from being outside. I'll stick with: NIR is an essential component for calling a stream of photons sun-like... or not sun-like.
Well, I have a brute-force strategy for pgvector working reasonably well: individual partial indexes. It works for all those queries with category_id=<> clauses. You only need an index for the larger categories; for categories with row counts below a threshold you don't need an index, and a plain KNN/dot-product scan works fine.
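A minimal sketch of what I mean, assuming a recent pgvector with HNSW support; the table, columns, and category 42 are made up for illustration, and the real embedding dimension would of course be much larger than 3:

```
# One partial index per large category; small categories are scanned exactly.
psql "$DATABASE_URL" <<'SQL'
-- hypothetical schema: items(id bigint, category_id int, embedding vector(3))
CREATE INDEX IF NOT EXISTS items_cat42_embedding_idx
    ON items USING hnsw (embedding vector_ip_ops)
    WHERE category_id = 42;

-- The planner only picks the partial index when the query repeats the same
-- category_id predicate; other categories fall back to an exact scan.
SELECT id
FROM items
WHERE category_id = 42
ORDER BY embedding <#> '[0.1, 0.2, 0.3]'   -- <#> is pgvector's negative inner product
LIMIT 10;
SQL
```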