> California’s job expansion has continued into its 51st month, with Governor Gavin Newsom announcing that the state created 21,100 new jobs in July. Fast food jobs also continued to rise, exceeding 750,000 jobs for the first time in California history.
> “Our steady, consistent job growth in recent months highlights the strength of California’s economy – still the 5th largest in the entire world. Just this year, the state has created 126,500 jobs – solid growth by any measure.”
This is slightly out of date; California is now the world's fourth-largest economy as of April 2025, having passed Japan. I'd argue the data shows the state does not have a job-creation problem.
These 18,000 workers are most likely employed elsewhere at a 20-25% wage increase. Note that a different study didn't find a rise in unemployment: https://www.nbcbayarea.com/investigations/california-minimum... which suggests the people affected actually ended up with a better standard of living.
This group is well known for bias, over and over through the years. Nothing they report should be taken at face value.
"A considerable amount of financial support for the Center comes from labor unions: According to federal reports, over the last 15 years it has received nearly $1.2 million in labor funding."
"The IRLE’s highest-profile researcher is Michael Reich, who co-chairs its Center on Wage and Employment Dynamics. Reich made a name for himself at a young age co-founding the Union for Radical Political Economics, with the stated goal of supporting “public ownership of production and a government-planned economy.”"
The current demo shows a simplified view, but the tool can handle much more granular relationships. I have some glioblastoma and pancreatic cancer networks with protein-protein interactions, phosphorylation events, and pathway cross-talk that show the lower-level detail. The challenge is balancing accessibility with scientific rigor.
Sometimes. I just fed the huggingface demo an image containing some rather improbable details [1] and it OCRed "Page 1000000000000" with one extra trailing zero.
Honestly I was expecting the opposite: a repetition penalty kicking in after it had repeated zero too many times, resulting in too few zeros - but apparently not. So you might want to steer clear of this model if your document has a trillion pages.
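For anyone curious what that penalty actually does under the hood: here's a minimal sketch of a CTRL-style repetition penalty (the scheme behind the `repetition_penalty` parameter in common inference libraries), applied to raw logits. The logit values and penalty factor are made up for illustration.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """CTRL-style repetition penalty: for every token that already appeared,
    divide its logit by the penalty if positive, multiply if negative,
    so repeated tokens become less likely on the next step."""
    out = list(logits)
    for tok in set(generated_ids):
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

# Pretend token 0 is "0" and the model has already emitted it three times:
# its logit drops from 2.0 to ~1.67, nudging the model toward other tokens.
penalized = apply_repetition_penalty([2.0, 1.0, -0.5], generated_ids=[0, 0, 0])
```

With a penalty > 1 applied every step, a long enough run of zeros should eventually lose to a competing token - which is exactly the "too few zeros" failure I expected and didn't get.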
Other than that, it did a solid job - I've certainly seen worse attempts to OCR a table.
We[1] create "Units of Thought" from PDFs and then work with those for further discovery, where a "Unit of Thought" is any paragraph, title, or note heading - something that stands on its own semantically. We then create a hierarchy of objects from that PDF in the database for search and conceptual search - all at scale.
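Roughly, the idea looks like this - a heavily simplified toy sketch, where the heading heuristic and the `Unit` class are illustrative stand-ins, not our actual classifier:

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    """A 'Unit of Thought': text that stands on its own semantically."""
    kind: str        # 'title' or 'paragraph' (toy labels, not the real taxonomy)
    text: str
    children: list = field(default_factory=list)

def build_units(raw_text):
    """Toy extraction: short ALL-CAPS or Title-Cased blocks become titles,
    and following paragraphs nest under the most recent title."""
    root = Unit("title", "document")
    current = root
    for block in (b.strip() for b in raw_text.split("\n\n")):
        if not block:
            continue
        if len(block) < 60 and (block.istitle() or block.isupper()):
            current = Unit("title", block)
            root.children.append(current)
        else:
            current.children.append(Unit("paragraph", block))
    return root

doc = ("RISK FACTORS\n\nThe company faces ordinary market risk.\n\n"
       "GOVERNANCE\n\nThe board met quarterly.")
tree = build_units(doc)
```

The real pipeline classifies units with more than a length heuristic, but the hierarchy-of-objects shape is the same.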
I'm tempted to try it. My use case right now is a set of documents which are annual financial and statutory disclosures of a large institution. Every year they are formatted and organized slightly differently, which makes it enormously tedious to manually find and compare the same basic section from one year to another - but they are consistent enough that analogous sections are recognizable across years, because they often reuse verbatim quotes or highly specific key words each time.
What I really want to do is take all these docs and just reorder all the content such that I can look at page n (or section whatever) scrolling down and compare it between different years by scrolling horizontally. Ideally with changes from one year to the next highlighted.
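Given the "verbatim quotes and highly specific key words" property, even naive token-overlap matching might get surprisingly far at pairing up analogous sections across years before reaching for a full tool. A sketch - the sample sentences are invented, not from any real filing:

```python
import re

def tokens(text):
    """Lowercased word/number tokens as a set."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_match(section, candidates):
    """Match a section from one year's filing to the most similar section in
    another year's, by Jaccard overlap of token sets. Works when filings
    reuse verbatim phrases and specific key words year over year."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    src = tokens(section)
    return max(candidates, key=lambda c: jaccard(src, tokens(c)))

y2023 = ["Capital adequacy ratios under Basel III remained above thresholds.",
         "The audit committee met four times during the fiscal year."]
y2024 = ["During the fiscal year the audit committee convened five times.",
         "Basel III capital adequacy ratios stayed above regulatory thresholds."]

match = best_match(y2023[0], y2024)  # pairs up the two Basel III sections
```

Run that pairing per section and you have the reordering you describe; diffing the paired sections then gives the year-over-year highlights.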
Yep, they completely missed the boat. They tried to use concepts without actually modeling concepts, making a huge mess of contradicting statements which actually didn't model the world. Using a word in a statement does not a concept make!
Intransitive preferences are well known to experimental economists, but a hard pill to swallow for many, as they break a lot of algorithms (which depend on transitivity) and require more robust tools like https://en.wikipedia.org/wiki/Paraconsistent_logic
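A concrete illustration of the breakage: Python's sort assumes a transitive comparison, and feeding it a rock-paper-scissors-style intransitive preference makes the "best" element depend on the input order rather than being well defined.

```python
from functools import cmp_to_key

# Intransitive preferences: R beats S, S beats P, P beats R.
beats = {("R", "S"), ("S", "P"), ("P", "R")}

def cmp(a, b):
    if a == b:
        return 0
    return -1 if (a, b) in beats else 1  # "better" items sort first

# Same three items, two input orders - the sorted "winner" differs,
# because the sort's correctness contract (transitivity) is violated.
first = sorted(["R", "P", "S"], key=cmp_to_key(cmp))[0]
second = sorted(["P", "S", "R"], key=cmp_to_key(cmp))[0]
```

No bug in the sort - the precondition the algorithm depends on simply doesn't hold, which is the same trap as applying transitivity-assuming economics algorithms to real preference data.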
I think it's similar to how many people dislike the non-deterministic output of LLMs: when you use statistical tools, non-deterministic output is a VERY nice feature for exploring conceptual spaces with abductive reasoning: https://en.wikipedia.org/wiki/Abductive_reasoning
It's a tool I was using at a previous company, mixing LLMs, statistics and formal tools. I'm surprised there aren't more startups mixing LLM with z3 or even just prolog.
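On the prolog side, since neither z3 nor SWI-Prolog fits in a comment, here is a stdlib-only toy of the pattern I mean: the LLM emits facts as tuples, and a forward-chaining rule engine derives consequences until a fixpoint. All facts and the rule are invented for illustration.

```python
def rule_wants_mug(f):
    """Toy rule: if X prefers Y and Y is hot, conclude X wants a mug."""
    derived = set()
    for fact in f:
        if len(fact) == 3 and fact[0] == "prefers":
            _, who, what = fact
            if ("hot", what) in f:
                derived.add(("wants_mug", who))
    return derived

# Facts as an LLM might emit them after reading some text:
facts = {("prefers", "alice", "tea"), ("hot", "tea")}
rules = [rule_wants_mug]

# Forward-chain until no rule derives anything new (fixpoint).
changed = True
while changed:
    new = set().union(*(r(facts) for r in rules)) - facts
    facts |= new
    changed = bool(new)
```

The LLM handles the messy extraction; the formal side makes the derived consequences auditable - which is the division of labor I'd expect more startups to exploit.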
Thanks for the links, the "tradeoff" aspect of paraconsistent logic is interesting. I think one way to achieve consensus with your debate partner might be to consider that the language rep is "just" a nondeterministic decompression of "the facts". I'm primed to agree with you but
if conceptual thinking is manipulating abstract concepts after having been given concrete particulars, I'd say it relies heavily upon projection, which, as generalised "K" (from SKI), sounds awfully like calculation.
Here is why I think Gibson could in principle still be right (without necessarily summoning religious feelings)
[if we disregard that he said "concepts are key" -- though we can be yet more charitable and assume that he doesn't accept (median) human-level intelligence as the final boss]
Para-doxxing ">" Under-standing
(I haven't thought this through, just vibe-calculating, as it were, having pondered the necessity of concrete particulars for a split-second)
(More on that "sophistiKated" aspect of "projeKtion": turns out not to be as idiosynKratic as I'd presumed, but I traded bandwidth for immediacy here, so I'll let GP explain why that's interesting, if he indeed finds it is :)
Wolfram (self-styled heir to Leibniz/Galois) seems to be serving himself a fronthanded compliment:
A more generous take on the previous post is that the dominant paradigm of Math (consistent logic, which depends on many things like transitive preference) is wrong, and that another type of Math could work.
If you look at the slide, the subtree of correct answers exists; what's missing is just a way to make those answers more prevalent instead of less.
Personally, I think LeCun is just leaping to the wrong conclusion because he's sticking to the wrong tools for the job.
My point is no type of math will work to model reason. Math is one of the many tools of reason, it is not the basis for reason. This is a very common error.
A less generous take would be that humans are also stochastic parrots that can't help but say something when they see a trigger word like math, Trump, transgender, or abortion.
Under the hood would reveal that we think in terms of concepts, attributes, space-time experience etc, and that language is just a means of serializing that conceptual understanding. We do not think in language.