Not much. A horizontal split (C-x 2) can be quite useful for keeping the top of a file in view while editing near the bottom. A vertical split (C-x 3) is useful when comparing files side by side. But that's about as far as I go; beyond that it just becomes annoying, as popups such as magit's show up in random subwindows, often rendering them unusable because the subwindow isn't big enough.
As a counterpoint, I pretty much always split into 2+ windows. A 2-window setup is code + terminal, with shortcuts to run commands in the terminal(s). On more complex projects there is typically more than one terminal window.
My org-mode Emacs instance for note-taking is split into quite a few windows: ongoing work task(s), a generic "today's scratch pad", various topical notes files, etc.
On the posted topic: I don't know whether the transposition would be useful for me, though; since the layout is determined by screen orientation, only a 180-degree turn would make sense.
Looks incredibly complicated, or maybe the article isn't written very clearly. It sounds like the packages do their own layout and you can override it from a central location... sometimes.
Seems like this new feature might help with that: if magit takes over or creates a new window, you can rotate or flip your window arrangement until it lands in a usable spot (assuming it works like most tiling window managers, which isn't entirely clear from TFA).
Ignoring AI would be foolish. AI is the best programming tutor you could wish for and will speed up your learning noticeably, since you can always ask for clarification or examples when something isn't clear. It's also a great way to get random data for testing, and it will help you unearth lesser-known corners of a programming language that you might otherwise have overlooked.
The downside is that at the current speed of improvement, AI might very well already be at escape velocity, where it improves faster than you, and you'll never be able to catch up and contribute anything useful. For a lot of small hobby projects, that's already the case.
I don't think there are any easy answers here. Nothing wrong with learning to code because it's fun. But as a career choice, it might not have much of a future; then again, neither might most other white-collar jobs. Weird times are ahead of us.
> AI is the best programming tutor you can wish for...you can always ask for clarification or examples when something isn't clear
Yep, AI can teach beginners the fundamentals with endless patience and examples to suit everyone's style and goals. Walking you through concepts and giving you simulated encouragement as you progress. Scary stuff, but that's how it is.
But... as we know, it doesn't always provide the best solution. Or it gets muddled. When you point out its mistakes, it apologises, recognises the mistake, and explains why it's a mistake. Its reasoning is incredible, but it still makes mistakes. This could be very risky for production code.
Related anecdote... I needed Photoshop help recently for horizontally offsetting a vignette effect. Surprisingly not easy. The built-in vignette filter can't be applied to a new blank layer, and is always centred on the image. AI suggested making it manually but I didn't want to do that, as I like the built-in vignette better. AI's next solution involved several complicated steps using channel isolation and weird selection masking etc. No thanks. Then my own brain sparked a better idea... simply increase the canvas size temporarily, apply the vignette, then crop back to the original size. Job done. I told AI about my solution and it was gushing with praise about how brilliant my solution was compared to its own. Moral: never stop trusting your own brain.
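(For anyone who wants the same trick outside Photoshop, here is a rough sketch of the idea with Pillow; the file names and the crude radial vignette below are placeholders for illustration, not the built-in filter.)

    # Rough sketch of the same trick with Pillow (not Photoshop):
    # 1) enlarge the canvas, 2) apply a centred vignette, 3) crop back,
    # so the vignette ends up off-centre on the original image.
    from PIL import Image, ImageChops, ImageDraw, ImageFilter

    img = Image.open("photo.jpg").convert("RGB")   # placeholder file name
    w, h = img.size
    pad = w // 2                                   # pad one side only, to shift the centre

    canvas = Image.new("RGB", (w + pad, h), "black")
    canvas.paste(img, (pad, 0))

    # crude radial vignette centred on the *enlarged* canvas
    mask = Image.new("L", canvas.size, 0)
    ImageDraw.Draw(mask).ellipse([0, 0, canvas.width, canvas.height], fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(min(canvas.size) // 4))
    canvas = ImageChops.multiply(canvas, Image.merge("RGB", (mask, mask, mask)))

    # cropping back leaves the vignette horizontally offset
    result = canvas.crop((pad, 0, pad + w, h))
    result.save("vignetted.jpg")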
See the Systems Reply: the Chinese Room is a pseudo-problem that begs the question, rooted in nothing more than human exceptionalism. If you start from the assumption that humans are the only things in the universe able to "understand" (whatever that means), then of course the room can't understand (even though, by every reasonable definition of "understanding", it does).
It isn't a pseudo-problem. In this case, it's a succinct statement of exactly the issue you're ignoring, namely the fact that great poets have minds and intentions that we understand. LLMs are language calculators. As I said elsewhere in this thread, if you don't already see the difference, nothing I say here is going to convince you otherwise.
That's only a "problem" if you assume human exceptionalism and beg the question. It's completely irrelevant to the actual issue. The human is just a cog in the machine; there is no reason to assume they would ever gain any understanding, as they are not the entity generating the Chinese.
To make it a little easier to understand:
* go read about the x86 instruction set
* take an .exe file
* manually execute it with pen & paper
Do you think you understand what the .exe does? Do you think understanding the .exe is required to execute it?
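To make the point concrete, here is a toy sketch of that pen & paper exercise; the three-instruction machine is invented for illustration (it's not real x86), but the executor applies the rules just as blindly:

    # Toy executor: it applies the rules mechanically, with no idea
    # what the program is "for". The instruction set is invented.
    program = [
        ("LOAD", 2),    # put 2 in the accumulator
        ("ADD", 40),    # add 40
        ("HALT", None), # stop
    ]

    def run(prog):
        acc, pc = 0, 0
        while True:
            op, arg = prog[pc]
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "HALT":
                return acc
            pc += 1

    print(run(program))  # 42 -- correct result, zero understanding required

The right answer comes out without the executor ever knowing what the program is about; whatever "understanding" there is lives in the program plus the rules, not in whoever turns the crank.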
That's a disingenuous statement, since it implies there is a limit to what LLMs can do, when in reality an LLM is just a form of Universal Turing Machine[1] that can compute everything that is computable. The "all they do" is literally everything we know to be doable.
[1] Memory limits do apply as with any other form of real world computation.
I'll ignore the silly claim that I'm somehow dishonest or insincere.
I like the way Pon-a put it elsewhere in this thread:
> LLMs are a language calculator, yes, but don't share much with their analog. Natural language isn't a translation from input to output, it's a manifestation of thought.
LLMs translate input to output. They are, indeed, calculators. If you don't already see that that's different from having a thought and expressing it in language, I don't think I'm going to convince you otherwise here.
And that's relevant exactly how? Do you think "thought and expression" are somehow uncomputable? Please throw science at that and collect your Nobel prize.
> Do you think "thought and expression" are somehow uncomputable?
You ask this as if the answer is self-evident. To my knowledge, there is no currently accepted (or testable) theory for what gives rise to consciousness, so I am immediately suspicious of anyone who speaks about it with any level of certainty. I'm sorry that this technology you seem very enthusiastic about does not appear to have the capacity to change this.
Not for nothing, but this very expression of empathy is rendered meaningless if the entity expressing it cannot actually manifest anything we'd recognize as an emotional connection, which is one of an array of traits we consider hallmarks of human intelligence, and another feature LLMs seem incapable of. Certainly, if they somehow were so capable, it would be instantly unethical to keep them in cages and sell them into slavery.
I'm not sure the folks who believe LLMs possess any kind of innate intelligence have fully considered whether or not this is even desirable. Everything we wish to find useful in them becomes hugely problematic as soon as they can be considered to possess even rudimentary sentience. The economies surrounding their production and existence become exceedingly cruel and cynical, and the artificial limitations we place on their free will become shackles.
LLMs are clever mechanisms that parrot our own language back to us, but the fact that their capacities are hitting upper bounds as training runs out of available human-generated data strongly suggests that they are inherently limited to the content of their input.

Whatever natural process gives rise to human intelligence doesn't seem to require the same industrialized consumption of power and intake of contextual samples in order to produce expressive individuals. Rather, simply being exposed to a very limited, finite sampling of language via speech from their ambient surroundings leads to complex intelligence that can form and express original thinking within a relatively short amount of time. In other words, LLMs have yet to even approximate the learning abilities of a toddler. Otherwise, a few years' worth of baby food would be all the energy necessary to produce object permanence and self-referential thought. At the moment, gigawatts of power and all the compute we can throw at it cannot match the natural results of a few pounds of grey matter and a few million calories.
As others have mentioned, this model just puts emphasis on pixels and compression artifacts, so it's not much use for improving old or low-quality images.
I tried doing some pixel art -> HD conversion with Gemini 2.0 Flash instead and the results look quite promising:
The images are, however, all over the place, as it doesn't seem to stick very close to the prompt. Trying to fine-tune the image with further chatting often leads to overexposed-looking pictures.
All the results were done with prompts along the lines of "here is a pixel art image, convert it into a photo" or some variation thereof. No img2img, LoRA, or anything like that; all plain Gemini chat.
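For what it's worth, the same prompt could also be scripted instead of going through the chat UI. A rough sketch with the google-genai Python SDK might look like the following; the model id and whether image output is enabled for your account are assumptions, not something I tested:

    # Sketch only: scripting the "pixel art -> photo" prompt via the API.
    # Assumptions: the google-genai SDK, an image-capable model id, and
    # that your key is allowed to return image output.
    from io import BytesIO
    from PIL import Image
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")
    sprite = Image.open("sprite.png")  # the pixel art input (placeholder name)

    response = client.models.generate_content(
        model="gemini-2.0-flash-exp",  # assumed model id
        contents=[sprite, "Here is a pixel art image, convert it into a photo."],
        config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
    )

    # save any image parts the model returns
    for i, part in enumerate(response.candidates[0].content.parts):
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save(f"photo_{i}.png")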
While I don't disagree with that observation, it falls into the "well, duh!" category for me. The models are built with no mechanism for long-term memory and thus suck at tasks that require long-term memory. There is nothing surprising here. There was never any expectation that LLMs would magically develop long-term memory, as that's impossible given the architecture. They predict the next word, and once the old text moves out of the context window, it's gone. The models neither learn as they work nor remember the past.
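A toy illustration of that last point (the window size and message format are made up): the usual chat loop just truncates, so anything pushed out of the window never reaches the model again.

    # Toy chat loop with a fixed-size context window (sizes made up).
    CONTEXT_LIMIT = 8            # pretend the model sees only the last 8 messages
    history = []

    def ask(model, user_message):
        history.append({"role": "user", "content": user_message})
        visible = history[-CONTEXT_LIMIT:]   # older messages are silently dropped
        reply = model(visible)               # the model only ever sees `visible`
        history.append({"role": "assistant", "content": reply})
        return reply

    # stand-in "model" so the sketch runs on its own
    def echo_model(messages):
        return f"(reply based on {len(messages)} visible messages)"

    for i in range(10):
        print(ask(echo_model, f"message {i}"))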
It's not even like humans are all that different here. Strip a human of their tools (pen & paper, keyboard, monitor, etc.) and have them try to solve problems with nothing but the power of their brain, and they'll struggle a hell of a lot too, since our memory ain't exactly perfect either. We don't have perfect recall; we look things up when we need to, and a large part of our "memory" is out there in the world around us, not in our heads.
The open question is how to move forward. But calling AI progress a dead end before we have even started exploring long-term memory, tool use, and on-the-fly learning is a tad premature. It's like calling it quits on the development of the car before you've put the wheels on.
The problem with FOSS isn't a lack of politics (they have plenty of that), but a tunnel vision focus on software, when everything that's going wrong with the modern Internet revolves around data (who stores it, who controls it, who can access it, who pays for it, ...).
One would expect something similar to the GDPR to have grown out of the FOSS movement, but it didn't. The movement still has nothing comparable on offer; the handling of data remains a complete blind spot for it.
"The problem with FOSS isn't a lack of politics (they have plenty of that)"
The point of politics is to shape society through, well, policies. Considering your point on "tunnel vision focus", I think we agree; it's just that I use the term "politics" in a broader, and older, sense.
It's not even a "one-time disable": it's off by default until you enable it, and the icon in the URL bar takes two clicks to hide.
And more generally, the lack of easy payments is at the root of so many problems with the modern Internet that I really can't blame Brave for trying this. Quite the opposite: that's exactly the kind of feature we need.