If you don’t want your image to look like it’s been marinated in nicotine, throw stuff like “neutral white background, daylight balanced lighting, no yellow tint” into your prompt. Otherwise, congrats on your free vintage urine filter.
They don't want you creating images that either mimic works of other artists to an extent that's likely to confuse viewers (or courts), or mimic realistic photographs to an extent that allows people to generate low-effort fake news. So they impose an intentionally crappy orange-cyan palette on everything the model generates.
Peak quality in terms of realistic color rendering was probably the initial release of DALL-E 3. Once they saw what was going to happen, they fixed that bug fast.
SDXL and FLUX models with LoRAs can and do vastly outperform the singular big models at tons of things those can't or won't do now. Various subreddits and CivitAI blogs describe ComfyUI workflows and details on how to maximize LoRA effectiveness, and are probably all you need for a guided tour of that space.
This is not my special interest, but the DIY space is much more interesting than the SaaS offerings. That holds for generative AI more generally: the DIY scene is going to be more interesting.
OpenAI's new image generation model is autoregressive, while DALL-E was diffusion. The yellowish tone is an artefact of their autoregressive pipeline, if I recall correctly.
Could be. My point is that if the pipeline itself didn't impart an unmistakable character to the generated images, OpenAI would feel compelled to make it do so on purpose.
Most DALL-E 3 images have an orange-blue cast, which is absolutely not an unintended artifact. You'd literally have to be blind to miss it, or at least color-blind. That wasn't true at first -- check the original paper, and try the same prompts! It was something they started doing not long after release, and it's hardly a stretch to imagine why.
They will be doing the same thing for the same reasons today, assuming it doesn't just happen as a side effect.
Genuine question:
When companies are bailed out by the taxpayer, why can't we then give ownership of the company to the taxpayer? Effectively 'buying' the company to save it, instead of just gifting it money for it to survive.
Is there a reason not to make the taxpayer (or government) the main shareholder after the bailout?
This has precedent in the US, like when the government nationalized failing freight railroads and merged them into Conrail. But after the more recent bank and auto bailouts I wouldn't expect to see this happen again. The shareholders would really prefer to have money thrown at them but also keep their stake.
> But after the more recent bank and auto bailouts I wouldn't expect to see this happen again. The shareholders would really prefer to have money thrown at them but also keep their stake.
The auto bailouts did not feature shareholders having money thrown at them and keeping their stakes (GM and Chrysler shareholders, for instance, were almost completely wiped out in the bailout, with the new GM owned by the UAW and the US and Canadian governments; the new Chrysler was majority owned by Fiat with minority stakes held by an autoworkers pension fund and the US and Canadian governments.)
Bank bailouts were more protective of shareholders because they were mostly government purchases of distressed assets or extensions of credit.
Not really; Intel had funding from Biden's bill, and Trump told them that in order to get that money they had to give a stake to the government. In this case Intel isn't being bailed out, just securing funding for new chip foundries.
> Genuine question : When companies are bailed out by the taxpayer, why can't we then give ownership of the company to the taxpayer?
tHaT'S sOcIAlIsM
Though ironically for the first time ever, the people shouting that would actually be correct. Kind of.
If you mean it in a broad "is this possible" sense, though, sure, absolutely. Entities owned in part or in whole by the state are not uncommon, but anytime such things are proposed in the US, the right loses its fucking mind.
Edit: hit the comment rate wall.
> I see! But still, I don't get in what sense it is more socialist than just having people actually buy the company to save it (instead of just saving it 'for free'). If anything it makes it more capitalist if the taxpayers invest in the bailout, instead of just giving it away!
Because socialism isn't an economic system in American politics; it's a scary word for whatever the Russians and CHYNA are. It's also completely interchangeable with communism, because our conservative party here has long since abandoned anything resembling reality, and even when they were here with us, they didn't know the difference between the two.
Doing it this way is capitalist because it's American. Doing it the other way is evil because it's socialist/communist, like the Russians/Chinese/North Koreans do, with the lot of this rhetoric absolutely drowning in racism and nationalism. Mind you, all those countries have issues, absolutely. I'm just saying a conservative with a gun to their head couldn't actually explain those issues; they're just evil because they're not American. [ insert eagle screech here ]
Honestly the best distillation is: It's Freedom when private citizens run things, and it's Communism when the government does. The fact that the government sometimes has to give rich private citizens a shit ton of money to keep things afloat is not reflected upon.
If you try and analyze it through a lens of what these words actually mean, yeah it makes no goddamn sense at all.
But not really, right? It could happen in the market: company A chooses to bail out company B by buying it and investing money to keep it afloat.
Except company A in this case is the government, no? Why is it that when the government does this, it has to gift the money instead of potentially profiting from it?
Edit: just saw the edit. I see! But still, I don't get in what sense it is more socialist to have people actually buy (forcefully invest in?) the company to save it, instead of just saving it 'for free'. If anything it makes it more capitalist if the taxpayers invest in the bailout, instead of just giving the money away!
It's pure propaganda playing on American fear of socialism.
In other very capitalist economies governments did take stakes in banks in return for bailouts. The first British bank that needed one (Northern Rock) was entirely taken over by the government and shareholders just lost their money. The government bought stakes in others. It was still criticised as being too generous to shareholders and management.
I feel as though you're ignoring the most important part of that sentence. I assume you meant to write:
I believe that AGI will be a net benefit to whomever controls it.
I would argue that if a profit-driven company rents something valuable out to others, you should expect it to benefit them just as much as, if not more than, those paying for that privilege. Rented things may be useful, but they certainly are not a net benefit to the system as a whole.
No, I believe AGI will have a net benefit for all of humanity. The telephone system was a net benefit for all Americans even though, for a time, AT&T (Ma Bell) controlled it.
AGI is fantasy at this point, and your assumption that AGI would give OpenAI unprecedented powers is the Musk/Yudkowsky/Hinton argument that AI will dominate and enslave us.
Drop those assumptions and my point stands that throughout history, monopolistically-controlled transformative technologies (telephones, electricity, vaccines, railroads) have still delivered net benefits to society, even if imperfectly distributed. This is just historical fact.
> AGI is fantasy at this point, and your assumption that AGI would give OpenAI unprecedented powers is the Musk/Yudkowsky/Hinton argument that AI will dominate and enslave us.
Yeah, like I said, room for improvement. I find the argument that AGI or sAGI should be feared, or is likely to turn "evil", absurd in the best case. So you're arguing against a strawman I already find stupid.
Telephones increased the speed of information transfer; they couldn't produce value on their own. Electricity allowed transmission of energy from one place to another, and doesn't produce inherent value in isolation. Vaccines are in an entirely different class of advancement (so I have no idea how you mean to apply them to the expected benefits of AGI; I assume you believe AGI will have something to do with reducing disability). Railroads, again, like energy or telephones, involved moving something of value from one place to another.
AGI is supposed to produce a potentially limitless amount of inherent value on its own, right? It will do more than just move around components of value, but more like a diamond mine, it will output something valuable as a commodity. Something that can easily be controlled... oh but it's also not concrete, you can never have your own, it's only available for rental, and you have to agree to the ToS. That sounds just like all previous inventions, right?
You're welcome to cite any historical facts you like, but you're unwilling or unable to draw concrete parallels or form convincing conclusions yourself, and instead hand-wave: "most impressive inventions in the past were good, so I feel AGI will be cool too!"
Also, the critical difference (ignoring the environmental differences between then and now) between the inventions you cited and AGI is the difficulty of replicating the technology. Other than "it happened before to most technologies", is there a reason I should believe that AGI would be easy to replicate for any company that wants to compete against the people actively working to increase the size of their moat? Copper wire and train tracks are easy to install. Do you expect AGI will be easy for everyone to train?
oh, sorry dude... I wasn't expecting the indirect insult to be the only thing you read... my intent was less for you to take offense, and more to point out how you're arguing against something I never said and don't believe. I would have been interested in the reasoning behind the claim, and the parallels you saw, but was unwilling to tolerate the strawman.
Thanks. I'm sorry I jumped to the conclusion that you were making the doomer argument. I see now your argument is much more subtle and raises some interesting points. If I understand it correctly, it's like: what if one company owned the internet? But worse than that, what if one company owned access to intelligence? I'm old, so I remember when AT&T owned the American phone system. We couldn't hook up anything to the phone jack without permission, so intuitively I did understand your argument, but my opposition to doomer arguments (pause research! regulate!) got in the way.
You only exist because you were forced to be birthed externally? Everything has a beginning.
In fact, what is artificial is stopping the generation of an LLM when it reaches a 'stop token'.
A more natural barrier is the attention size, but with 2 million tokens, LLMs can think for a long time without losing any context. And you can take over with memory tools for longer horizon tasks.
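To make that concrete, here's a toy sketch of the usual decode loop (hypothetical `model` callable and token ids, not any real API): the model only ever proposes a next token, and both halting conditions are imposed from outside it.

```python
def generate(model, tokens, stop_id, max_ctx):
    """Decode until the context fills up or we choose to halt."""
    while len(tokens) < max_ctx:      # natural barrier: attention/context size
        nxt = model(tokens)           # the model just proposes the next token
        if nxt == stop_id:            # artificial barrier: we decide this means "stop"
            break
        tokens.append(nxt)
    return tokens

# Stand-in "model" that emits len(tokens) + 1, with token 4 as the stop token.
print(generate(lambda toks: len(toks) + 1, [0], stop_id=4, max_ctx=10))  # [0, 2, 3]
```

Delete the `stop_id` check and the loop happily runs to `max_ctx`; the stop token is a convention layered on top, not a property of the generation process itself.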
Yeah, an empathetic person would understand how and why the person is feeling the way they are and acknowledge it. There can of course be legal repercussions to going off on your employer, but even a "yeah, that's not how I would have done it myself", etc., can show that you actually do care.
If somebody you know was dumped recently and is saying negative things about their ex, it's perfectly fine to "agree" or commiserate while they process and go through the stages of grief (ignoring any issues like their ex being family, etc).
The author reads to me like one of those perennial "think positive thoughts only" people that think that'll get them success.
I see; it was probably my high learning rate that caused problems. To be honest, I was a bit too lazy to retry full finetuning since LoRA worked so well, but maybe I'll revisit this in the future, perhaps with Qwen Image.
Perhaps what you were dealing with was actually exploding gradients under fp16 training, which _are_ prone to corrupting a model, and this can depend on the learning rate.
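As a quick illustration of that failure mode (just NumPy fp16 arithmetic, not an actual training setup): fp16 tops out at 65504, so a single gradient spike overflows to inf, and from then on every weight it touches is inf/NaN.

```python
import numpy as np

assert np.finfo(np.float16).max == 65504.0     # fp16's representable ceiling

grad = np.float16(50000.0) * np.float16(2.0)   # gradient spike: overflows to inf
weight = np.float16(0.5) - np.float16(1e-3) * grad
print(grad, weight)                            # inf -inf: the weight is corrupted

# Gradient clipping (or a lower learning rate) keeps the update finite:
clipped = np.clip(100000.0, -1.0, 1.0)
safe = np.float16(0.5) - np.float16(1e-3) * np.float16(clipped)
print(safe)
```

This is why fp16 trainers typically pair a large learning rate with gradient clipping or loss scaling; bf16 sidesteps it by keeping fp32's exponent range.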
If you can screen tokens against your grammar fast enough, you can build a bitmask over the entire token vocabulary and apply it right before sampling. As vocabulary sizes grow, this gets more complex to do in real time, but we (and other libraries) have found several optimizations to do this extremely quickly (eg for guidance, we detail some optimizations here https://github.com/guidance-ai/llguidance/blob/main/docs/opt...).
Other libraries work by essentially pre-computing all the masks for all possible generations, but of course you're restricted to working with simple grammars in this case (like a subset of regular expressions)
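A minimal sketch of that mask-then-sample step, with a toy 8-token vocabulary and a hardcoded allowed-set standing in for the grammar check (illustrative only, not llguidance's actual API):

```python
import numpy as np

def sample_with_mask(logits: np.ndarray, allowed: np.ndarray, rng) -> int:
    """Apply a grammar bitmask right before sampling: disallowed tokens
    get -inf logits, so they receive exactly zero probability."""
    masked = np.where(allowed, logits, -np.inf)
    masked = masked - masked.max()       # stable softmax over masked logits
    probs = np.exp(masked)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Toy vocabulary of 8 tokens; suppose the grammar only allows 2, 5, 7 here.
rng = np.random.default_rng(0)
logits = rng.normal(size=8)
allowed = np.zeros(8, dtype=bool)
allowed[[2, 5, 7]] = True

samples = {sample_with_mask(logits, allowed, rng) for _ in range(100)}
print(samples)  # only tokens from {2, 5, 7} ever appear
```

The sampling step itself stays cheap (one element-wise mask over the vocab); the hard part, as the linked optimization notes discuss, is computing `allowed` fast enough each step as vocabularies grow.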
It's not expensive per se; a single element-wise multiplication of the output vector.
The real "expense" is that you need to prepare masks for every element of your grammar as they are expensive to recompute as needed; LLM tokens do not cleanly map onto elements of your grammar. (Consider JSON: LLM tokens often combine various special characters such as curly braces, colons, and quotes.)
This isn't that hard to compute, it's just more work to implement.
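To see why the mapping is messy, here's a toy version (hypothetical BPE-ish vocab, with a single target string standing in for the grammar's prefix language): a token is allowed only if the whole string it decodes to keeps the output valid, so a token like `'":'` that fuses several grammar terminals has to be checked as a unit.

```python
# Hypothetical vocab: BPE-style tokens that fuse JSON punctuation.
vocab = ['{', '{"', '"key"', '":', ':', ' ', 'true', '}', 'hello']

def allowed_after(prefix: str, accepts) -> list[int]:
    """Ids of tokens whose decoded text keeps the output grammar-valid."""
    return [i for i, tok in enumerate(vocab) if accepts(prefix + tok)]

# Stand-in "grammar": valid prefixes of one flat JSON object.
target = '{"key": true}'
accepts = lambda s: target.startswith(s)

print(allowed_after('', accepts))       # [0, 1]: both '{' and '{"' fit
print(allowed_after('{"key', accepts))  # [3]: only '":', spanning close-quote + colon
```

With a real 100k+ vocabulary you can't afford this scan per step, which is why masks get precomputed per grammar state and cached.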
Good question. Some frameworks do apply the mask immediately; others defer it for performance or implementation simplicity. Mask precomputation can get tricky with large vocabularies, especially when grammar elements span multiple tokens. Immediate masking is usually preferred, but optimizations kick in when you're juggling complicated grammars or working against throughput bottlenecks.
Hey! I'm the author of the post. We haven't optimized sampling yet so it's running linearly on the CPU. A lot of SOTA work either does this while the model is running the forward pass or does the masking on the GPU.
The greedy accept is there so that the mask doesn't need to be computed. Planning to make this more efficient from both ends.
There are studies that show how heritable intelligence is. Very.
It is quite common for people to say that something is only a correlation and not causation. But if you can point to a common denominator that has been shown multiple times to have a massive effect, it's not likely to be just a coincidence. Genes are this common denominator. Society and habits (for example, Protestants vs Catholics) are another.
Things that consistently impact the whole population are not just a random process that picks: "You will be clever, ugly.", "You will be pretty, sporty, but will be dumb." It's always genes, society and habits.
Some research on the subject here: https://www.bbc.co.uk/future/article/20180905-how-genes-infl.... Personally I question how one can definitively prove to what extent genes are the factor, as there are so many other factors in play, but those researchers know more than I do.